Dataset schema (each record below consists of one Query Text, thirteen ranked documents Ranking 1-13, and fourteen relevance scores score_0-score_13):

Query Text: string, lengths 9-8.71k
Ranking 1: string, lengths 14-5.31k
Ranking 2: string, lengths 11-5.31k
Ranking 3: string, lengths 11-8.42k
Ranking 4: string, lengths 17-8.71k
Ranking 5: string, lengths 14-4.95k
Ranking 6: string, lengths 14-8.42k
Ranking 7: string, lengths 17-8.42k
Ranking 8: string, lengths 10-5.31k
Ranking 9: string, lengths 9-8.42k
Ranking 10: string, lengths 9-8.42k
Ranking 11: string, lengths 10-4.11k
Ranking 12: string, lengths 14-8.33k
Ranking 13: string, lengths 17-3.82k
score_0: float64, range 1-1.25
score_1: float64, range 0-0.25
score_2: float64, range 0-0.25
score_3: float64, range 0-0.24
score_4: float64, range 0-0.24
score_5: float64, range 0-0.24
score_6: float64, range 0-0.21
score_7: float64, range 0-0.1
score_8: float64, range 0-0.02
score_9 to score_13: float64, range 0-0
Development of information granules of higher type and their applications to granular models of time series. The study is devoted to the design of information granules of higher type (especially type-2) with the use of the principle of justifiable granularity. The development of granules is realized in two key phases: first, information granules of type-1 are formed and then they are extended to type-2 constructs. Following the principle, information granules are designed by establishing a sound balance between their experimental justification (legitimacy) and specificity (associated with their underlying semantics). The definitions of coverage and specificity of type-2 information granules are revised to capture the essence of these constructs. Detailed formulas are derived for several main categories of membership functions (namely, triangular, parabolic, and square root) as well as intervals. The study delivers detailed results for interval-valued fuzzy sets described by membership functions coming from the main classes listed above. Illustrative studies include synthetic data exhibiting some probabilistic properties. The direct application of information granules of type-1 and type-2 is demonstrated in the description and prediction of time series realized in the setting of information granules (with the resulting models referred to as granular models of time series).
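The query abstract above hinges on a single optimization: choose granule bounds that balance coverage (experimental legitimacy) against specificity. Below is a minimal Python sketch of that idea for a type-1 interval granule; the median as the numeric representative, the exponential specificity decay, and the independent per-side search are our simplifying assumptions, not formulas taken from the paper.

```python
import numpy as np

def justifiable_interval(data, alpha=1.0):
    # Sketch of the principle of justifiable granularity: grow an interval
    # [a, b] around a numeric representative, scoring each candidate bound
    # by coverage * specificity (assumptions as stated in the lead-in).
    data = np.asarray(data, dtype=float)
    r = np.median(data)                           # numeric representative
    span = float(data.max() - data.min()) or 1.0  # guard against zero spread

    def best_bound(candidates, left):
        best, best_score = r, -np.inf
        for b in candidates:
            lo, hi = (b, r) if left else (r, b)
            coverage = np.mean((data >= lo) & (data <= hi))   # legitimacy
            specificity = np.exp(-alpha * abs(b - r) / span)  # semantics
            score = coverage * specificity
            if score > best_score:
                best, best_score = b, score
        return best

    a = best_bound(data[data <= r], left=True)
    b = best_bound(data[data >= r], left=False)
    return a, b
```

Larger alpha yields tighter, more specific granules. The paper's two-phase design would then granulate such type-1 results further (for instance, turning the bounds a and b themselves into intervals) to arrive at type-2 constructs.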
Information Granules-Based BP Neural Network for Long-Term Prediction of Time Series Long-term time series prediction is a challenging and essential task both in theory and practice. Recently, information granulation has been shown to be an appropriate tool for long-term forecasting. Though some models for the long-term prediction problem have been proposed using information granulation, there is still a growing need to develop new prediction approaches for time series data ba...
Building the fundamentals of granular computing: A principle of justifiable granularity The study introduces and discusses a principle of justifiable granularity, which supports a coherent way of designing information granules in the presence of experimental evidence (either of numerical or granular character). The term ''justifiable'' pertains to the construction of the information granule, which is formed in such a way that it is (a) highly legitimate (justified) in light of the experimental evidence, and (b) specific enough, meaning it comes with well-articulated semantics (meaning). The design process is associated with a well-defined optimization problem balancing the two requirements of experimental justification and specificity. A series of experiments is provided, and constructs realized for various formalisms of information granules (intervals, fuzzy sets, rough sets, and shadowed sets) are discussed.
The Design of Free Structure Granular Mappings: The Use of the Principle of Justifiable Granularity The study introduces a concept of mappings realized in the presence of information granules and offers a design framework supporting the formation of such mappings. Information granules are conceptually meaningful entities formed on the basis of a large number of experimental input–output numeric data available for the construction of the model. We develop a conceptually and algorithmically sound way of forming information granules. Considering the directional nature of the mapping to be formed, this directionality aspect needs to be taken into account when developing information granules. The property of directionality implies that while the information granules in the input space could be constructed with a great deal of flexibility, the information granules formed in the output space have to inherently relate to those built in the input space. The input space is granulated by running a clustering algorithm; for illustrative purposes, the focus here is on fuzzy clustering realized with the aid of the fuzzy C-means algorithm. The information granules in the output space are constructed with the aid of the principle of justifiable granularity (one of the underlying fundamental conceptual pursuits of Granular Computing). The construct exhibits two important features. First, the information granules are formed in the presence of information granules already constructed in the input space (and this realization is reflective of the direction of the mapping from the input to the output space). Second, the principle of justifiable granularity does not confine the realization of information granules to a single formalism such as fuzzy sets but helps form the granules expressed in any required formalism of information granulation. The quality of the granular mapping (viz. the mapping realized for the information granules formed in the input and output spaces) is expressed in terms of the coverage criterion (articulating how well the experimental data are "covered" by information granules produced by the granular mapping for any input experimental data). Some parametric studies are reported, quantifying the performance of the granular mapping (expressed in terms of the coverage and specificity criteria) versus the values of certain parameters utilized in the construction of output information granules through the principle of justifiable granularity. The plots of the coverage–specificity dependency help determine a knee point and reach a sound compromise between these two conflicting requirements imposed on the quality of the granular mapping. Furthermore, the quality of the mapping is quantified with regard to the number of information granules (implying a certain granularity of the mapping). A series of experiments is reported as well.
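The coverage-specificity plots mentioned in the abstract reduce to a knee-finding problem on a 2-D curve. Here is a short sketch using the common maximum-distance-to-chord heuristic; the paper states that a knee point is determined from the plots but does not prescribe this particular method, so treat it as one reasonable choice.

```python
import numpy as np

def knee_point(coverage, specificity):
    # Index of the point farthest from the chord joining the first and
    # last points of the coverage-specificity curve (assumes the sweep's
    # endpoints differ, so the chord is well defined).
    pts = np.column_stack([coverage, specificity]).astype(float)
    p0, p1 = pts[0], pts[-1]
    chord = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = pts - p0
    # perpendicular distance via the magnitude of the 2-D cross product
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    return int(np.argmax(dist))
```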
Multivariate modeling and type-2 fuzzy sets This paper explores the link between type-2 fuzzy sets and multivariate modeling. Elements of a space X are treated as observations fuzzily associated with values in a multivariate feature space. A category or class is likewise treated as a fuzzy allocation of feature values (possibly dependent on values in X). We observe that a type-2 fuzzy set on X generated by these two fuzzy allocations captures imprecision in the class definition and imprecision in the observations. In practice many type-2 fuzzy sets are in fact generated in this way and can therefore be interpreted as the output of a classification task. We then show that an arbitrary type-2 fuzzy set can be so constructed, by taking as a feature space a set of membership functions on X. This construction presents a new perspective on the Representation Theorem of Mendel and John. The multivariate modeling underpinning the type-2 fuzzy sets can also constrain realizable forms of membership functions. Because averaging operators such as centroid and subsethood on type-2 fuzzy sets involve a search for optima over membership functions, constraining this search can make computation easier and tighten the results. We demonstrate how the construction can be used to combine representations of concepts and how it therefore provides an additional tool, alongside standard operations such as intersection and subsethood, for concept fusion and computing with words.
Building consensus in group decision making with an allocation of information granularity. Consensus is defined as a cooperative process in which a group of decision makers develops and agrees to support a decision in the best interest of the whole. It is a questioning process, more than an affirming process, in which the group members usually modify their choices until a high level of agreement within the group is achieved. Given the importance of forming a decision accepted by the entire group, the consensus problem has attracted great attention as it is a major goal in group decision making. In this study, we propose the concept of information granularity regarded as an important and useful asset supporting the goal of reaching consensus in group decision making. By using fuzzy preference relations to represent the opinions of the decision makers, we develop a concept of a granular fuzzy preference relation where each pairwise comparison is formed as a certain information granule (say, an interval, fuzzy set, rough set, and the like) instead of a single numeric value. Being more abstract, the granular format of the preference model offers the flexibility required to increase the level of agreement within the group, since we can select the most suitable numeric representative of the fuzzy preference relation.
Granular representation and granular computing with fuzzy sets In this study, we introduce a concept of a granular representation of numeric membership functions of fuzzy sets, which offers a synthetic and qualitative view of fuzzy sets and their ensuing processing. The notion of consistency of the granular representation is formed, which helps regard the problem as a certain optimization task. More specifically, the consistency is referred to a certain operation φ, which gives rise to the concept of φ-consistency. Likewise introduced is a concept of granular consistency with regard to a collection of several operations. Given the essential role played by logic operators in computing with fuzzy sets, detailed investigations include and- and or-consistency as well as (and, or)-consistency of granular representations of membership functions with the logic operators implemented in the form of various t-norms and t-conorms. The optimization framework supporting the realization of the φ-consistent optimization process is provided through particle swarm optimization. Further conceptual and representation issues impacting the processing of fuzzy sets are discussed as well.
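To make the and/or-consistency discussion concrete: for monotone t-norms and t-conorms, granular (interval-valued) membership grades combine endpoint-wise. The sketch below rests on that monotonicity assumption and deliberately omits the paper's PSO-driven optimization of granule sizes.

```python
def t_norm_min(a, b):
    return min(a, b)          # the minimum t-norm (logical "and")

def t_conorm_max(a, b):
    return max(a, b)          # the maximum t-conorm (logical "or")

def granular_and(g1, g2, t=t_norm_min):
    # Endpoint-wise extension of a monotone t-norm to interval grades.
    return (t(g1[0], g2[0]), t(g1[1], g2[1]))

def granular_or(g1, g2, s=t_conorm_max):
    return (s(g1[0], g2[0]), s(g1[1], g2[1]))

# e.g. granular_and((0.2, 0.4), (0.3, 0.9)) evaluates to (0.2, 0.4)
```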
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CLP(R) is used to illustrate verification techniques.
System processes are software too This talk explores the application of software engineering tools, technologies, and approaches to developing and continuously improving systems by focusing on the systems' processes. The systems addressed are those that are complex coordinations of the efforts of humans, hardware devices, and software subsystems, where humans are on the "inside", playing critical roles in the functioning of the system and its processes. The talk suggests that in such cases, the collection of processes that use the system is tantamount to being the system itself, suggesting that improving the system's processes amounts to improving the system. Examples of systems from a variety of different domains that have been addressed and improved in this way will be presented and explored. The talk will suggest some additional untried software engineering ideas that seem promising as vehicles for supporting system development and improvement, and additional system domains that seem ripe for the application of this kind of software-based process technology. The talk will emphasize that these applications of software engineering approaches to systems have also had the desirable effect of adding to our understandings of software engineering. These understandings have created a software engineering research agenda that is complementary to, and synergistic with, agendas for applying software engineering to system development and improvement.
Software development: two approaches to animation of Z specifications using Prolog Formal methods rely on the correctness of the formal requirements specification, but this correctness cannot be proved. This paper discusses the use of software tools to assist in the validation of formal specifications and advocates a system by which Z specifications may be animated as Prolog programs. Two Z/Prolog translation strategies are explored; formal program synthesis and structure simulation. The paper explains why the former proved to be unsuccessful and describes the techniques developed for implementing the latter approach, with the aid of case studies
SADT/SAINT: Large scale analysis simulation methodology SADT/SAINT is a highly structured, top-down simulation methodology for defining, analyzing, communicating, and documenting large-scale systems. Structured Analysis and Design Technique (SADT), developed by SofTech, provides a functional representation and a data model of the system that is used to define and communicate the system. System Analysis of Integrated Networks of Tasks (SAINT), currently used by the USAF, is a simulation technique for designing and analyzing man-machine systems but is applicable to a wide range of systems. By linking SADT with SAINT, large-scale systems can be defined in general terms, decomposed to the necessary level of detail, translated into SAINT nomenclature, and implemented into the SAINT program. This paper describes the linking of SADT and SAINT resulting in an enhanced total simulation capability that integrates the analyst, user, and management.
Scalable Hyperspectral Image Coding Here we propose scalable Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), an embedded, block-based, wavelet transform coding algorithm of low complexity for hyperspectral image compression. Scalable 3D-SPECK supports both SNR and resolution progressive coding. After the wavelet transform, 3D-SPECK treats each subband as a coding block. To generate an SNR scalable bitstream, the stream is organized so that the same indexed bit planes are put together across coding blocks and subbands, with the higher bit planes preceding the lower ones. To generate resolution scalable bitstreams, each subband is encoded separately to generate a sub-bitstream. Rate is allocated amongst the sub-bitstreams produced for each block. To decode the image sequence to a particular level at a given rate, we need to encode each subband at a higher rate so that the algorithm can truncate the sub-bitstream to the assigned rate. Resolution scalable 3D-SPECK is efficient for the application of an image server. Results show that scalable 3D-SPECK provides excellent performance on hyperspectral image compression.
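The SNR-progressive ordering the abstract describes (same-indexed bit planes grouped across coding blocks, higher planes first) can be sketched in a few lines. Treating each block as a ready-made list of per-bit-plane byte chunks is a simplification; in real 3D-SPECK those chunks come out of its set-partitioning passes.

```python
def snr_scalable_stream(blocks):
    # blocks: dict mapping a block id to a list of byte chunks ordered
    # from the most to the least significant bit plane.
    depth = max(len(chunks) for chunks in blocks.values())
    stream = bytearray()
    for plane in range(depth):            # higher bit planes first
        for chunks in blocks.values():    # same plane across all blocks
            if plane < len(chunks):
                stream += chunks[plane]
    # Truncating the result at any point keeps the most significant
    # information of every block, which is what makes it SNR scalable.
    return bytes(stream)
```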
MoMut::UML Model-Based Mutation Testing for UML
score_0-score_13: 1.029388, 0.028571, 0.017959, 0.014694, 0.005878, 0.000816, 0.000272, 0, 0, 0, 0, 0, 0, 0
A large system evaluation of SREM A comprehensive evaluation of the Software Requirements Engineering Methodology (SREM) was performed to assess its capabilities for specifying the software requirements of large, embedded computer systems and to recommend improvements which would enhance its effectiveness. Specific evaluation criteria were developed to judge the effectiveness of the methodology, its support tools and user training. The approach included attending a SREM training course and using SREM to specify the software requirements for two Air Force systems. The relatively small number of errors uncovered indicates the effectiveness of disciplined requirements analysis techniques and the capabilities of SREM for exposing subtle problems. In general, it was found that SREM was an effective vehicle for specifying and analyzing the software requirements of large embedded computer systems, especially descriptions of real world objects, data requirements and message processing. However, deficiencies were noted in the specification language, in the "friendliness" of the user interfaces to the analysis and simulation tools, in the performance of these tools and in the effectiveness of the training. Appropriate improvements to all of the functional deficiencies are recommended.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
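The abstract's "function taking abstract predicates to concrete ones" supports a compact statement of the refinement condition. The following is a sketch in our own notation, with rho the predicate transformer carrying abstract predicates to concrete ones and wp_A, wp_C the abstract and concrete programs viewed as predicate transformers; this is the standard shape of such simulation conditions, not necessarily the paper's exact formulation.

```latex
% C data-refines A via \rho when, for every abstract postcondition q,
% what A guarantees abstractly implies what C guarantees concretely:
\forall q \;:\; \rho\bigl(\mathrm{wp}_A(q)\bigr) \;\Rightarrow\; \mathrm{wp}_C\bigl(\rho(q)\bigr)
```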
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
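The core move/tabu/aspiration loop of the abstract's approach can be sketched for the multiconstraint knapsack benchmark it mentions. This is a deliberately plain baseline: the paper's specialized choice rules, advanced-level strategies, and Target Analysis learning are not reproduced, and the tenure and iteration counts are illustrative.

```python
def tabu_knapsack(values, weights, capacities, iters=2000, tenure=7):
    # values[i]: profit of item i; weights[k][i]: weight of item i in
    # constraint k; capacities[k]: capacity of constraint k.
    n, m = len(values), len(capacities)
    x = [0] * n                  # current 0/1 solution
    load = [0] * m               # current load per constraint
    tabu_until = [0] * n         # iteration before which item i is tabu
    cur_val, best, best_val = 0, [0] * n, 0

    def flip_feasible(i):
        sign = -1 if x[i] else 1
        return all(load[k] + sign * weights[k][i] <= capacities[k]
                   for k in range(m))

    for t in range(iters):
        move, move_val = None, None
        for i in range(n):
            if not flip_feasible(i):
                continue
            v = cur_val + (-values[i] if x[i] else values[i])
            aspiration = v > best_val        # tabu override on improvement
            if (t >= tabu_until[i] or aspiration) and \
               (move is None or v > move_val):
                move, move_val = i, v
        if move is None:
            break                            # no admissible move remains
        sign = -1 if x[move] else 1
        for k in range(m):
            load[k] += sign * weights[k][move]
        x[move] ^= 1
        cur_val = move_val
        tabu_until[move] = t + tenure        # forbid immediate reversal
        if cur_val > best_val:
            best, best_val = x[:], cur_val
    return best, best_val
```

For example, tabu_knapsack([10, 8, 6], [[5, 4, 3]], [8]) returns ([1, 0, 1], 16), packing items 0 and 2 within the single capacity of 8.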
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, as well as power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0-score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Ontology-Driven Information Systems: Past, Present and Future We trace the roots of ontology-driven information systems (ODIS) back to early work in artificial intelligence and software engineering. We examine the lofty goals of the Knowledge-Based Software Assistant project from the 80s, and pose some questions. Why didn't it work? What do we have today instead? What is on the horizon? We examine two critical ideas in software engineering: raising the level of abstraction, and the use of formal methods. We examine several other key technologies and show how they paved the way for today's ODIS. We identify two companies with surprising capabilities that are on the bleeding edge of today's ODIS, and are pointing the way to a bright future. In that future, application development will be opened up to the masses, who will require no computer science background. People will create models in visual environments and the models will be the applications, self-documenting and executing as they are being built. Neither humans nor computers will be writing application code. Most functionality will be created by reusing and combining pre-coded functionality. All application software will be ontology-driven.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, as well as power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0-score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Safeware: system safety and computers
PARTS: a temporal logic-based real-time software specification and verification method Areas of computer application are being broadened rapidly due to the rapid improvement in the performance of computer hardware. Applications that were not feasible before are now becoming feasible with high-performance computers. This results in increased demands for computer applications that are large and have complex temporal characteristics. Most analysis methods available, however, cannot handle large, complex real-time systems adequately; they do not scale up, lack the formalism to represent complex features and perform analyses with mathematical rigor, do not support analyses from different viewpoints, or are too hard to learn and apply. We need analysis methods that support formal specification and verification of real-time systems. Incremental specification and analysis of systems from different viewpoints (e.g., user, analyst) must also be supported, with languages appropriate for each viewpoint and for the users involved. This paper introduces a real-time systems analysis method, named PARTS, that aims at providing the above features. PARTS supports analyses from two viewpoints: the external viewpoint, a view of the system from the user's perspective, and the internal viewpoint, a view from the developer's perspective. These viewpoints are specified using formal languages: Real-Time Events Trace (RTET) for the external viewpoint, and Time Enriched Statecharts (TES) and the PARTS Data Flow Diagram (PDFD) for the internal viewpoint. All PARTS languages are based on Real-Time Temporal Logic (RTTL), and the consistency of the specifications made from the two different viewpoints is analyzed based on the same RTTL formalism. PARTS converts RTET and TES specifications to RTTL specifications, which are then integrated and analyzed for consistency. All of the PARTS specification languages support a top-down strategy to handle complexity.
The architecture tradeoff analysis method This paper presents the Architecture Tradeoff Analysis Method (ATAM), a structured technique for understanding the tradeoffs inherent in the architectures of software-intensive systems. This method was developed to provide a principled way to evaluate a software architecture's fitness with respect to multiple competing quality attributes: modifiability, security, performance, availability, and so forth. These attributes interact (improving one often comes at the price of worsening one or more of the others), as is shown in the paper, and the method helps us to reason about architectural decisions that affect quality attribute interactions. The ATAM is a spiral model of design: one of postulating candidate architectures followed by analysis and risk mitigation, leading to refined architectures.
CREWS-SAVRE: Scenarios for Acquiring and Validating Requirements This paper reports research into semi-automatic generation of scenarios for validating software-intensive system requirements. The research was undertaken as part of the ESPRIT IV 21903 'CREWS' long-term research project. The paper presents the underlying theoretical models of domain knowledge, computational mechanisms and user-driven dialogues needed for scenario generation. It describes how CREWS draws on theoretical results from the ESPRIT III 6353 'NATURE' basic research action, that is, object system models which are abstractions of the fundamental features of different categories of problem domain. CREWS uses these models to generate normal course scenarios, then draws on theoretical and empirical research from cognitive science, human-computer interaction, collaborative systems and software engineering to generate alternative courses for these scenarios. The paper describes a computational mechanism for deriving use cases from object system models, simple rules to link actions in a use case, taxonomies of classes of exceptions which give rise to alternative courses in scenarios, and a computational mechanism for the generation of multiple scenarios from a use case specification.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available at http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
A classification of semantic conflicts in heterogeneous database systems
A program integration algorithm that accommodates semantics-preserving transformations Given a program Base and two variants, A and B, each created by modifying separate copies of Base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that includes both sets of changes as well as the portions of Base preserved in both variants. Text-based integration techniques, such as the one used by the UNIX diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of Base, A, and B. The first program-integration algorithm to provide such guarantees was developed by Horwitz, Prins, and Reps. However, a limitation of that algorithm is that it incorporates no notion of semantics-preserving transformations. This limitation causes the algorithm to be overly conservative in its definition of interference. For example, if one variant changes the way a computation is performed (without changing the values computed) while the other variant adds code that uses the result of the computation, the algorithm would classify those changes as interfering. This paper describes a new integration algorithm that is able to accommodate semantics-preserving transformations.
Generating test cases for real-time systems from logic specifications We address the problem of automated derivation of functional test cases for real-time systems, by introducing techniques for generating test cases from formal specifications written in TRIO, a language that extends classical temporal logic to deal explicitly with time measures. We describe an interactive tool that has been built to implement these techniques, based on interpretation algorithms of the TRIO language. Several heuristic criteria are suggested to reduce drastically the size of the test cases that are generated. Experience in the use of the tool on real-life cases is reported.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
OPNets: an object-oriented high-level Petri net model for real-time system modeling This article describes an approach, called OPNets, for modeling real-time systems based on the object-oriented formalization of high-level Petri nets. To increase the maintainability and reusability of objects in Petri net modeling, the approach focuses on the decoupling of interobject communication knowledge and the separation of synchronization constraints from the internal structure of objects. To validate the overall system, which is composed of the hierarchically organized objects and interconnection relations, we used a two-step validation procedure that reduces the complexity and computational effort required. As an illustration, a manufacturing cell with machining centers and robots is modeled using OPNets. The modeling experience with OPNets demonstrates that the decoupling and separation of knowledge and constraints clearly enhances maintenance and reusability in real-time system modeling.
Visual support for reengineering work processes
Node coordination in peer-to-peer networks Peer-to-peer networks and other many-to-many relations have become popular especially for content transfer. To better understand and trust these types of networks, we need formally derived and verified models for them. Due to the large scale and heterogeneity of these networks, it may be difficult and cumbersome to create and analyse complete models. In this paper, we employ the modularisation approach of the Event-B formalism to model the separation of the functionality of each peer in a peer-to-peer network from the network structure itself, thereby working towards a distributed, formally derived and verified model of a peer-to-peer network. As coordination aspects are fundamental in the network structure, we focus our formalisation effort in this paper especially on these. The resulting approach demonstrates considerable expressivity in modelling coordination aspects in peer-to-peer networks.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0-score_13: 1.011905, 0.021622, 0.01243, 0.011531, 0.010974, 0.010974, 0.005976, 0.004105, 0.002348, 0.000324, 0.000008, 0, 0, 0
Faultless Systems: Yes We Can! Gradually introducing some simple features will eventually result in a global improvement in the software development situation.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Cooperative negotiation in concurrent engineering design Design can be modeled as a cooperative multi-agent problem solving task where different agents possess different knowledge and evaluation criteria. These differences may result in inconsistent design decisions and conflicts that have to be resolved during design. The process by which resolution of inconsistencies is achieved in order to arrive at a coherent set of design decisions is negotiation. In this paper, we discuss some of the characteristics of design which make it a very challenging domain for investigating negotiation techniques. We propose a negotiation model that incorporates accessing information in existing designs, communication of design rationale and criticisms of design decisions, as well as design modifications based on constraint relaxation and comparison of utilities. The model captures the dynamic interactions of the cooperating agents during negotiations. We also present representational structures of the expertise of the various agents and a communication protocol that supports multi-agent negotiation.
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question of how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly, in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skills as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
A metamodel approach for the management of multiple models and the translation of schemes A metamodel approach is proposed as a framework for the definition of different data models and the management of translations of schemes from one model to another. This notion is useful in an environment for the support of the design and development of information systems, since different data models can be used and schemes referring to different models need to be exchanged. The approach is based on the observation that the constructs used in the various models can be classified into a limited set of basic types, such as lexical type, abstract type, aggregation, function. It follows that the translations of schemes can be specified on the basis of translations of the involved types of constructs: this is effectively performed by means of a procedural language and a number of predefined modules that express the standard translations between the basic constructs.
On formal aspects of electronic (or digital) commerce: examples of research issues and challenges The notion of electronic or digital commerce is gaining widespread popularity. By and large, these developments are being led by industry and government, with academic research following these trends in the form of empirical and economic research. Much more fundamental improvements to (global) commerce are possible, but are presently being overlooked for lack of adequate formal theories, representations and tools. This paper attempts to incite research in these directions.
Using the WinWin Spiral Model: A Case Study At the 1996 and 1997 International Conferences on Software Engineering, three of the six keynote addresses identified negotiation techniques as the most critical success factor in improving the outcome of software projects. The USC Center for Software Engineering has been developing a negotiation-based approach to software system requirements engineering, architecture, development, and management. This approach has three primary elements: Theory W, a management theory and approach, which says that making winners of the system's key stakeholders is a necessary and sufficient condition for project success. The WinWin spiral model, which extends the spiral software development model by adding Theory W activities to the front of each cycle. WinWin, a groupware tool that makes it easier for distributed stakeholders to negotiate mutually satisfactory (win-win) system specifications. This article describes an experimental validation of this approach, focusing on the application of the WinWin spiral model. The case study involved extending USC's Integrated Library System to access multimedia archives, including films, maps, and videos. The study showed that the WinWin spiral model is a good match for multimedia applications and is likely to be useful for other applications with similar characteristics--rapidly moving technology, many candidate approaches, little user or developer experience with similar systems, and the need for rapid completion.
STeP: Deductive-Algorithmic Verification of Reactive and Real-Time Systems The Stanford Temporal Prover, STeP, combines deductive methods with algorithmic techniques to verify linear-time temporal logic specifications of reactive and real-time systems. STeP uses verification rules, verification diagrams, automatically generated invariants, model checking, and a collection of decision procedures to verify finite- and infinite-state systems. System Description: The Stanford Temporal Prover, STeP, supports the computer-aided formal verification of reactive, real-time...
Quantitative evaluation of software quality The study reported in this paper establishes a conceptual framework and some key initial results in the analysis of the characteristics of software quality. Its main results and conclusions are: • Explicit attention to characteristics of software quality can lead to significant savings in software life-cycle costs. • The current software state-of-the-art imposes specific limitations on our ability to automatically and quantitatively evaluate the quality of software. • A definitive hierarchy of well-defined, well-differentiated characteristics of software quality is developed. Its higher-level structure reflects the actual uses to which software quality evaluation would be put; its lower-level characteristics are closely correlated with actual software metric evaluations which can be performed. • A large number of software quality-evaluation metrics have been defined, classified, and evaluated with respect to their potential benefits, quantifiability, and ease of automation. • Particular software life-cycle activities have been identified which have significant leverage on software quality. Most importantly, we believe that the study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited. This framework is certainly not complete, but it has been brought to a point sufficient to serve as a viable basis for future refinements and extensions.
An Approach to Fair Applicative Multiprogramming This paper presents a brief formal semantics of constructors for ordered sequences (cons) and for unordered multisets (frons) followed by a detailed operational semantics for both. A multiset is a generalization of a list structure which lacks order a priori; its order is determined by the a posteriori migration of computationally convergent elements to the front. The introductory material includes an example which demonstrates that a multiset of yet-unconverged values and a timing primitive may be used to implement the scheduler for an operating system in an applicative style. The operational semantics, given in PASCAL-like code, is described in two detailed steps: first a uniprocessor implementation of the cons/frons constructors and the first/rest probes, followed by an extension to a multiprocessor implementation. The center of either implementation is the EUREKA structure transformation, which brings convergent elements to the fore while preserving order of shared structures. The multiprocessor version is designed to run on an arbitrary number of processors with only one semaphore but makes heavy use of the sting memory store primitive. Stinging is a conditional store operation which is carried out independently of its dispatching processor so that shared nodes may be somewhat altered without interfering with other processors. An appendix presents the extension of this code to a fair implementation of multisets.
Some properties of sequential predictors for binary Markov sources Universal prediction of the next outcome of a binary sequence drawn from a Markov source with unknown parameters is considered. For a given source, the predictability is defined as the least attainable expected fraction of prediction errors. A lower bound is derived on the maximum rate at which the predictability is asymptotically approached uniformly over all sources in the Markov class. This bound is achieved by a simple majority predictor. For Bernoulli sources, bounds on the large deviations performance are investigated. A lower bound is derived for the probability that the fraction of errors will exceed the predictability by a prescribed amount Δ>0. This bound is achieved by the same predictor if Δ is sufficiently small.
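A minimal sketch of the majority-predictor idea for a first-order binary Markov source follows; the initial-state convention and the tie-breaking rule are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch (hypothetical, not the paper's code): for each Markov
# state (here, the previous bit) predict the symbol that has followed that
# state most often so far; ties go to 1.
from collections import defaultdict

def majority_predict(sequence):
    counts = defaultdict(lambda: [0, 0])   # state -> [# zeros seen, # ones seen]
    errors = 0
    prev = 0                               # assumed initial state
    for bit in sequence:
        guess = 1 if counts[prev][1] >= counts[prev][0] else 0
        errors += (guess != bit)
        counts[prev][bit] += 1             # update the per-state counts
        prev = bit
    return errors / len(sequence)          # empirical fraction of errors

print(majority_predict([0, 1, 0, 1, 1, 0, 1, 0, 0, 1]))
```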
Program Construction by Parts. Given a specification that includes a number of user requirements, we wish to focus on the requirements in turn, and derive a partly defined program for each; then combine all the partly defined programs into a single program that satisfies all the requirements simultaneously. In this paper we introduce a mathematical basis for solving this problem, and we illustrate it by means of a simple example. 1 Introduction and Motivation: We propose a program construction method whereby, given a...
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0..score_13: 1.20067, 0.20067, 0.20067, 0.20067, 0.20067, 0.20067, 0.100339, 0.050301, 0.020181, 0.0001, 0, 0, 0, 0
Unconstrained handwritten character recognition based on fuzzy logic This paper presents an innovative approach called the box method for feature extraction for the recognition of handwritten characters. In this method, the binary image of the character is partitioned into a fixed number of subimages called boxes. The features consist of the vector distance (γ) from each box to a fixed point. To find γ, the vector distances from the fixed point of all the pixels lying in a particular box are calculated, added up, and normalized by the number of pixels within that box. Here, both neural networks and fuzzy logic techniques are used for recognition, and recognition rates are found to be around 97 percent using neural networks and 98 percent using fuzzy logic. The methods are independent of font and size, and with minor changes in preprocessing they can be adapted for any language.
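A short, hypothetical sketch of the box method described above; the grid size, fixed point, and Euclidean distance convention are illustrative assumptions rather than details from the paper.

```python
# Illustrative sketch: partition a binary character image into a grid of
# boxes and, for each box, average the distance of its foreground pixels
# from a fixed point (the per-box feature, normalised by pixel count).
import numpy as np

def box_features(image, grid=(4, 4), fixed_point=(0.0, 0.0)):
    h, w = image.shape
    bh, bw = h // grid[0], w // grid[1]
    fy, fx = fixed_point
    features = []
    for by in range(grid[0]):
        for bx in range(grid[1]):
            box = image[by*bh:(by+1)*bh, bx*bw:(bx+1)*bw]
            ys, xs = np.nonzero(box)                      # foreground pixels
            if len(ys) == 0:
                features.append(0.0)                      # empty box
                continue
            dist = np.sqrt((ys + by*bh - fy)**2 + (xs + bx*bw - fx)**2)
            features.append(dist.sum() / len(ys))         # normalise by pixel count
    return np.array(features)

demo = (np.random.default_rng(0).random((16, 16)) > 0.7).astype(int)
print(box_features(demo))
```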
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
SADT/IDEF0 for Augmenting UML, Agile and Usability Engineering Methods.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Development of knowledge-based systems on the basis of an executable specification (translated from the German title: Entwicklung wissensbasierter Systeme auf der Grundlage einer ausführbaren Spezifikation)
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Separating Concerns in Direct Manipulation User Interfaces Direct-manipulation user interfaces are difficult to implement as a tapered hierarchy. Features such as drag enabling and continuous graphical feedback require frequent interaction and collaboration among a large number of objects in multiple layers. These collaborations complicate the design of the interfaces in the various layers. We present a new component-interface model called a “mode component”, whose features simplify the expression of collaboration enabling and feedback across layer boundaries. We illustrate the use of mode components through a large example
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Lifting general correctness into partial correctness is ok Commands interpreted in general correctness are usually characterised by their wp and wlp predicate transformer effects. We describe a way to ascribe to such commands a single predicate transformer semantics which embodies both their wp and wlp characteristics. The new single predicate transformer describes an everywhere-terminating "lifted" computation in an ok-enriched variable space, where ok is inspired by Hoare and He's UTP but has the novelty here that it enjoys the same status as the other state variables, so that it can be manipulated directly in the lifted computation itself. The relational model of this lifted computation is not, however, simply the canonical UTP relation of the original underlying computation, since this turns out to yield too cumbersome a lifted computation to permit reasoning about efficiently with the mechanised tools available. Instead we adopt a slightly less constrained model, which we are able to show is nevertheless still effective for our purpose, and yet admits a much more efficient form of mechanised reasoning with the tools available.
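For context on the wp/wlp pairing this abstract builds on, the classical identity relating total and partial correctness (a standard textbook fact, not a statement of the paper's lifted semantics) is:

```latex
% Total correctness decomposes into partial correctness plus termination:
\[
  wp(S, Q) \;=\; wlp(S, Q) \,\wedge\, wp(S, \mathit{true})
\]
```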
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Multi-layered Desires Based Framework to Detect Users' Evolving Non-functional Requirements Non-functional requirements (NFRs) play a crucial role in all the downstream activities of a software life-cycle process. Capturing newly emerged NFRs is key to software evolution. Recent research shows functional requirements in the form of task-level alternative features can be elicited from user behavioral and system contextual data through user goal inference. Considering the close connection between the concepts of goal and desire, we posit that there is an opportunity to extract new NFRs based on users' mental states, particularly their desires. We propose to use a statistical model to infer desires with multiple levels of abstraction based on contextual data under the Situ framework. Our multi-layered desire inference method takes inference confidence into consideration, and tries to make sense of inference results with both high- and low-inference confidence. By utilizing the different abstraction levels of desires, we provide an illustrative example with three cases to elicit users' new NFRs, including new high-level and low-level desires and new contributing relationships between them. Several implications of this work are also discussed. We plan to conduct experiments on human subjects to validate the proposed method, as the IRB has just approved our proposal.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
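To make the search scheme concrete, here is a bare-bones sketch of tabu search on a tiny invented multiconstraint knapsack instance: single-bit flip moves, a recency-based tabu list, and a simple aspiration criterion. It does not reproduce the paper's specialized choice rules, probabilistic measures, or Target Analysis; all data and parameters are made up for illustration.

```python
import random

def tabu_search_knapsack(values, weights, capacities, iters=200, tenure=5, seed=0):
    """Bare-bones tabu search for a 0/1 multiconstraint knapsack.

    Moves flip a single variable; a flipped index stays tabu for
    `tenure` iterations unless the move beats the best value found
    so far (a simple aspiration criterion).
    """
    rng = random.Random(seed)
    n = len(values)

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(values[i] * sol[i] for i in range(n))

    x = [0] * n
    best, best_val = x[:], 0
    tabu_until = [0] * n
    for t in range(1, iters + 1):
        candidates = []
        for i in rng.sample(range(n), n):      # random order breaks ties
            y = x[:]
            y[i] ^= 1
            if not feasible(y):
                continue
            v = value(y)
            if t >= tabu_until[i] or v > best_val:   # tabu test + aspiration
                candidates.append((v, i, y))
        if not candidates:
            break
        v, i, x = max(candidates, key=lambda c: c[0])
        tabu_until[i] = t + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

# Tiny invented instance: 6 items, 2 knapsack constraints.
values = [10, 13, 7, 8, 4, 9]
weights = [[3, 4, 2, 3, 1, 3],
           [4, 3, 3, 2, 2, 2]]
capacities = [9, 8]
print(tabu_search_knapsack(values, weights, capacities))
```

The tabu list is what lets the search accept non-improving (even value-decreasing) moves without immediately cycling back, which is the core difference from plain local search.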
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Distributed Kalman filtering for sensor networks In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices, which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task. Index Terms—sensor networks, distributed Kalman filtering, consensus filtering, sensor fusion
Consensus-based algorithms for distributed filtering The paper addresses Distributed State Estimation (DSE) over sensor networks. Two existing consensus approaches for DSE of linear systems, named consensus on information (CI) and consensus on measurements (CM), are extended to nonlinear systems. Further, a novel hybrid consensus approach exploiting both CM and CI (named HCMCI=Hybrid CM + CI) is introduced in order to combine their complementary benefits. Novel theoretical results, limitedly to linear systems, on the guaranteed stability of the HCMCI filter under minimal requirements (i.e. collective observability and network connectivity) are proved. Finally, a simulation case-study is presented in order to comparatively show the effectiveness of the proposed consensus-based state estimators.
The extended Kalman filter as an exponential observer for nonlinear systems In this correspondence, we analyze the behavior of the extended Kalman filter as a state estimator for nonlinear deterministic systems. Using the direct method of Lyapunov, we prove that under certain conditions, the extended Kalman filter is an exponential observer, i.e., the dynamics of the estimation error is exponentially stable. Furthermore, we discuss a generalization of the Kalman filter with exponential data weighting to nonlinear systems.
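For orientation, a generic discrete-time EKF predict/correct cycle of the kind being analyzed looks roughly as follows; the two-state example system is invented for illustration and is not taken from the correspondence.

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/correct cycle of a discrete-time extended Kalman filter.

    f, h        : nonlinear state-transition and measurement functions
    F_jac, H_jac: their Jacobians, evaluated at the current estimate
    Q, R        : process and measurement noise covariances
    """
    # Predict through the nonlinear model; propagate covariance linearly.
    x_pred = f(x_est, u)
    F = F_jac(x_est, u)
    P_pred = F @ P @ F.T + Q

    # Correct with the linearized measurement model.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new

# Invented 2-state example: mildly nonlinear dynamics, position-only sensing.
f = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] - 0.05 * np.sin(x[0])])
h = lambda x: np.array([x[0]])
F_jac = lambda x, u: np.array([[1.0, 0.1], [-0.05 * np.cos(x[0]), 1.0]])
H_jac = lambda x: np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])

x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, None, np.array([0.3]), f, h, F_jac, H_jac, Q, R)
print(x)
```

The stability result summarized in the abstract concerns exactly this error dynamics: under suitable observability and nonlinearity bounds, x_est converges exponentially to the true state.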
A scheme for robust distributed sensor fusion based on average consensus We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
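A minimal sketch of the diffusion idea described here: each node repeatedly replaces its data with a weighted average of its neighbors' data, and with a suitable weight choice (Metropolis weights are one standard option, assumed here) every node converges to the network-wide average. The graph and measurements below are invented.

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weight matrix for an undirected graph: one standard
    choice that guarantees average consensus on a connected graph."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Invented 4-node path graph; each node holds one noisy measurement.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
x = np.array([1.0, 3.0, 2.0, 6.0])
W = metropolis_weights(adj)
for _ in range(100):
    x = W @ x          # each node averages with its neighbors only
print(x, x.mean())     # all entries approach the initial average 3.0
```

No routing or global coordination is needed, which is why the abstract can claim robustness to link failures and changing topology: any jointly connected sequence of such averaging steps still mixes the data.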
Distributed robust filtering with H∞ consensus of estimates The paper addresses a problem of design of distributed robust filters using the recent vector dissipativity theory. The main result is a sufficient condition which guarantees a suboptimal H∞ level of disagreement of estimates in a network of filters. It involves solving a convex optimization/feasibility problem subject to LMI constraints. The special case of balanced interconnection graphs is also considered. A gradient descent type algorithm is presented which allows the nodes to compute their estimator parameters in a decentralized manner. The proposed approach is applied to the problem of observer-based robust synchronization of a nonlinear network to an isolated node.
Diffusion Strategies for Distributed Kalman Filtering and Smoothing We study the problem of distributed Kalman filtering and smoothing, where a set of nodes is required to estimate the state of a linear dynamic system in a collaborative manner. Our focus is on diffusion strategies, where nodes communicate with their direct neighbors only, and the information is diffused across the network through a sequence of Kalman iterations and data-aggregation. We study the problems of Kalman filtering, fixed-lag smoothing and fixed-point smoothing, and propose diffusion algorithms to solve each one of these problems. We analyze the mean and mean-square performance of the proposed algorithms, provide expressions for their steady-state mean-square performance, and analyze the convergence of the diffusion Kalman filter recursions. Finally, we apply the proposed algorithms to the problem of estimating and tracking the position of a projectile. We compare our simulation results with the theoretical expressions, and note that the proposed approach outperforms existing techniques.
Mode-Dependent Stochastic Synchronization for Markovian Coupled Neural Networks With Time-Varying Mode-Delays. This paper investigates the stochastic synchronization problem for Markovian hybrid coupled neural networks with interval time-varying mode-delays and random coupling strengths. The coupling strengths are mutually independent random variables and the coupling configuration matrices are nonsymmetric. A mode-dependent augmented Lyapunov-Krasovskii functional (LKF) is proposed, where some terms involving triple or quadruple integrals are considered, which makes the LKF matrices mode-dependent as much as possible. This gives significant improvement in the synchronization criteria, i.e., less conservative results can be obtained. In addition, by applying an extended Jensen's integral inequality and the properties of random variables, new delay-dependent synchronization criteria are derived. The obtained criteria depend not only on upper and lower bounds of mode-delays but also on mathematical expectations and variances of the random coupling strengths. Finally, two numerical examples are provided to demonstrate the feasibility of the proposed results.
New approach on robust delay-dependent H∞ control for uncertain T-S fuzzy systems with interval time-varying delay This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither any model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
Stability of linear systems with general sawtooth delay It is well known that in many particular systems, the upper bound on a certain time-varying delay that preserves the stability may be higher than the corresponding bound for the constant delay. Moreover, sometimes oscillating delays improve the performance (Michiels, W., Van Assche, V. & Niculescu, S. (2005) Stabilization of time-delay systems with a controlled time-varying delays and applications...
Goal-Based Requirements Analysis Goals are a logical mechanism for identifying, organizing and justifying software requirements. Strategies are needed for the initial identification and construction of goals. In this paper we discuss goals from the perspective of two themes: goal analysis and goal evolution. We begin with an overview of the goal-based method we have developed and summarize our experiences in applying our method to a relatively large example. We illustrate some of the issues that practitioners face when using a goal-based approach to specify the requirements for a system and close the paper with a discussion of needed future research on goal-based requirements analysis and evolution. Keywords: goal identification, goal elaboration, goal refinement, scenario analysis, requirements engineering, requirements methods
The interdisciplinary study of coordination This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology. A key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies. Section 3 summarizes ways of applying a coordination perspective in three different domains: (1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.
Integrating Action Systems and Z in a Medical System Specification This paper reports on work carried out on formal specification of a computer-based system that is used to train the reaction abilities of patients with severe brain damage. The system contains computer programs by which the patients carry out different tests that are designed to stimulate their eyes and ears. Systems of this type are new and no formal specifications for them exist to our knowledge. The system specified here is developed together with the neurological clinic of a Finnish...
Addressing degraded service outcomes and exceptional modes of operation in behavioural models A dependable software system should attempt to at least partially satisfy user goals if full service provision is impossible due to an exceptional situation. In addition, a dependable system should evaluate the effects of the exceptional situation on future service provision and adjust the set of services it promises to deliver accordingly. In this paper we show how to express degraded service outcomes and exceptional modes of operation in behavioural models, i.e. use cases, activity diagrams and state charts. We also outline how to integrate the task of discovering and defining degraded outcomes and exceptional modes of operation into a requirements engineering process by presenting the relevant parts of our dependability-focused requirements engineering process DREP.
On backwards and forwards reachable sets bounding for perturbed time-delay systems Linear systems with interval time-varying delay and unknown-but-bounded disturbances are considered in this paper. We study the problem of finding an outer bound of the forwards reachable sets and an inner bound of the backwards reachable sets of the system. Firstly, two definitions on forwards and backwards reachable sets, where initial state vectors are not necessarily equal to zero, are introduced. Then, by using the Lyapunov-Krasovskii method, two sufficient conditions for the existence of: (i) the smallest possible outer bound of forwards reachable sets; and (ii) the largest possible inner bound of backwards reachable sets, are derived. These conditions are presented in terms of linear matrix inequalities with two parameters that need to be tuned, and therefore they can be efficiently solved by combining existing convex optimization algorithms with a two-dimensional search method to obtain optimal bounds. Lastly, the obtained results are illustrated by four numerical examples.
1.009736
0.007808
0.00757
0.007268
0.005621
0.003433
0.000003
0
0
0
0
0
0
0
Proceedings of the 2nd International Conference on Pragmatic Web, ICPW 2007, Tilburg, The Netherlands, October 22-23, 2007
Towards a semantic metrics suite for object-oriented design In recent years, much work has been performed in developing suites of metrics that are targeted for object-oriented software, rather than functionally oriented software. This is necessary since good object-oriented software has several characteristics, such as inheritance and polymorphism, that are not usually present in functionally oriented software. However, all of these object-oriented metrics suites have been defined using only syntactic aspects of object-oriented software; indeed, the earlier functionally-oriented metrics were also calculated using only syntactic information. All syntactically oriented metrics have the problem that the mapping from the metric to the quality the metric purports to measure, such as the software quality factor "cohesion," is indirect, and often arguable. Thus, a substantial amount of research effort goes into proving that these syntactically oriented metrics actually do measure their associated quality factors. This paper introduces a new suite of semantically derived object-oriented metrics, which provide a more direct mapping from the metric to its associated quality factor than is possible using syntactic metrics. These semantically derived metrics are calculated using knowledge-based, program understanding, and natural language processing techniques.
Active Knowledge Systems for the Pragmatic Web Abstract: As the limitations of the Semantic Web become apparent, the next step, creating the Pragmatic Web, requires active knowledge systems that have the capability to support practical and complex human interaction and communication. A key ingredient in this effort is a system's ability to respond to events in the real world. The Pragmatic Web would therefore not be merely a knowledge interchange medium; it would actively support humans using that knowledge to accomplish tasks. The main goal of this paper is to show how an active knowledge system can support formal models of human pragmatic communication, combining earlier work on active knowledge systems, formal models of communication acts and formal models of organizational actors. We carry through an extended example illustrating some of these ideas.
Using Issue Tracking Tools to Facilitate Student Learning of Communication Skills in Software Engineering Courses When teaching communication and teamwork skills in software engineering courses, it is often difficult to relate the theories of communication as presented in communication textbooks to actual student interactions and team activities because the majority of student interactions and team activities take place outside the classroom. Through our experience in teaching communication theories in CS456/556, a software engineering course at Ohio University, we observed that when communication theories are delivered in traditional methods such as lectures without additional exercises designed for students to apply the theories, many students tend to treat them as an independent part of the course and continue to guide their behaviors in team activities with their old habits and preexisting intuitions. We found that issue tracking tools can help facilitate student learning of communication skills by forcing students to explicitly carry out effective steps recommended by communication theories and thus improve communications among students. Moreover, issue tracking tools also improve communications between the students and the instructor, and enable the instructor to be more aware of team status, detect team problems early on, and rely less on time-consuming and often inaccurate in-class team status reports.
Making Workflow Change Acceptable Virtual professional communities are supported by network information systems composed from standard Internet tools. To satisfy the interests of all community members, a user-driven approach to requirements engineering is proposed that produces not only meaningful but also acceptable specifications. This approach is especially suited for workflow systems that support partially structured, evolving work processes. To ensure the acceptability, social norms must guide the specification process. The RENISYS specification method is introduced, which facilitates this process using composition norms as formal representations of social norms. Conceptual graph theory is used to represent four categories of knowledge definitions: type definitions, state definitions, action norms and composition norms. It is shown how the composition norms guide the legitimate user-driven specification process by analysing a case on the development of an electronic law journal.
The mystery of the tower revealed: a non-reflective description of the reflective tower Abstract In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
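To make the dioid notion concrete, the sketch below implements the (max, +) semiring, in which "addition" is max and "multiplication" is ordinary +, and iterates the linear recursion x(k+1) = A ⊗ x(k) that governs firing epochs in a timed event graph; the matrix entries are invented for illustration.

```python
import numpy as np

NEG_INF = -np.inf   # the zero element of the max-plus dioid

def maxplus_matmul(A, B):
    """Matrix product in the (max, +) dioid: sums become max,
    products become ordinary addition."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.full((n, p), NEG_INF)
    for i in range(n):
        for j in range(p):
            C[i, j] = max(A[i, k] + B[k, j] for k in range(m))
    return C

# Invented timed event graph: entry A[i][j] is the processing delay
# from transition j to transition i (NEG_INF = no arc).
A = np.array([[3.0, NEG_INF],
              [2.0, 4.0]])
x = np.array([[0.0], [0.0]])    # k-th firing epochs of the two transitions
for k in range(5):
    x = maxplus_matmul(A, x)    # x(k+1) = A (x) x(k): "linear" in the dioid
    print(k + 1, x.ravel())
```

The point of the abstract is that once max and + are treated as the semiring operations, eigenvalue-style and transfer-function-style analysis carries over to such event systems.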
Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
Further Improvement of Free-Weighting Matrices Technique for Systems With Time-Varying Delay A novel method is proposed in this note for stability analysis of systems with a time-varying delay. Appropriate Lyapunov functional and augmented Lyapunov functional are introduced to establish some improved delay-dependent stability criteria. Less conservative results are obtained by considering the additional useful terms (which are ignored in previous methods) when estimating the upper bound of the derivative of Lyapunov functionals and introducing the new free-weighting matrices. The resulting criteria are extended to the stability analysis for uncertain systems with time-varying structured uncertainties and polytopic-type uncertainties. Numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method
Protocol verification as a hardware design aid The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
The navigation toolkit The problem
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24
0.24
0.24
0.24
0.08
0
0
0
0
0
0
0
0
0
Hierarchy of LMI conditions for the stability analysis of time-delay systems Assessing stability of time-delay systems based on the Lyapunov–Krasovskii functionals has been the subject of many contributions. Most of the results are based, first, on an a priori design of functionals and, finally, on the use of the famous Jensen’s inequality. In contrast with this design process, the present paper aims at providing a generic set of integral inequalities which are asymptotically non conservative and then to design functionals driven by these inequalities. The resulting stability conditions form a hierarchy of LMI which is competitive with the most efficient existing methods (delay-partitioning, discretization and sum of squares), in terms of conservatism and of complexity. Finally, some examples show the efficiency of the method.
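For reference, the first two members of such a hierarchy, in the form standard in this literature (for symmetric R > 0 and a < b), are Jensen's inequality and the Wirtinger-based inequality that tightens it; these are the textbook statements, not formulas taken from this dataset entry.

```latex
% Jensen's integral inequality (order 0 of the hierarchy):
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds \;\ge\;
  \frac{1}{b-a}\,\Omega_0^{\top} R\, \Omega_0,
\qquad \Omega_0 = x(b) - x(a).

% Wirtinger-based integral inequality (order 1), strictly tighter:
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds \;\ge\;
  \frac{1}{b-a}\,\Omega_0^{\top} R\, \Omega_0
  + \frac{3}{b-a}\,\Omega_1^{\top} R\, \Omega_1,
\qquad \Omega_1 = x(b) + x(a) - \frac{2}{b-a}\int_{a}^{b} x(s)\, ds.
```

Higher members of the hierarchy add further projection terms onto higher-degree Legendre polynomials, each extra term only reducing the gap between the bound and the integral.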
Stability and Stabilization of Discrete-Time T-S Fuzzy Systems With Time-Varying Delay via Cauchy-Schwartz-Based Summation Inequality. This paper proposes new stability and stabilization conditions for discrete-time fuzzy systems with time-varying delays. By constructing a suitable Lyapunov–Krasovskii functional and introducing a new summation inequality based on the inequality of Cauchy–Schwartz form, which enhances the feasible region of the stability criterion for discrete-time systems with time-varying delay, a stability criterion for such systems is established. In order to show the effectiveness of the proposed inequality, which provides more tight lower bound of a summation term of quadratic form, a delay-dependent stability criterion for such systems is derived within the framework of linear matrix inequalities, which can be easily solved by various effective optimization algorithms. Going one step forward, the proposed inequality is applied to a stabilization problem in discrete-time fuzzy systems with time-varying delays. The advantages of the proposed stability and stabilization criteria are illustrated via two numerical examples.
Novel Lyapunov-Krasovskii functional with delay-dependent matrix for stability of time-varying delay systems. This paper investigates the stability criteria of time-varying delay systems with known bounds of the delay and its derivative. To obtain a tighter bound of the integral term, a quadratic generalized free-weighting matrix inequality (QGFMI) is proposed. Furthermore, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed with a delay-dependent matrix, which incorporates the information on the bound of the delay derivative. The relaxed stability condition using the QGFMI and LKF provides a larger delay bound with low computational burden. The superiority of the proposed stability condition is verified by comparison with recent results.
Multiple-integral inequalities to stability analysis of linear time-delay systems. This paper is concerned with the stability analysis of linear systems with constant delay. First, with the help of Schmidt orthogonalization, we define a new set of orthogonal polynomials. By using the orthogonal polynomial set, we propose a novel multiple-integral inequality, which can achieve less conservatism than many existing inequalities, such as Jensen's single-integral inequality, Jensen's double-integral inequality, the Wirtinger-based single-integral inequality and the auxiliary function-based double-integral inequality. Then, based on the proposed inequality, we derive a stability criterion for the system under consideration, which is less conservative than the existing ones. Finally, we provide a numerical example to illustrate the effectiveness of the derived criterion.
Stability analysis and stabilization for fuzzy hyperbolic time-delay system based on delay partitioning approach. This paper investigates the problems of stability analysis and stabilization for a class of fuzzy hyperbolic time-delay systems. A generalization of the fuzzy hyperbolic time-delay model is firstly proposed, which is more effective in representing nonlinear control systems. By means of the delay-partitioning method, a novel basis-dependent Lyapunov-Krasovskii function is constructed to reduce the conservatism of stability conditions. Those conditions are converted to finite linear matrix inequalities, which can be readily solved by standard numerical software. Using this result, the problem of stabilization is also solved. Then, both the stability and stabilization results are further extended to fuzzy hyperbolic time-delay systems with parameter uncertainties. Finally, three illustrative examples are provided to demonstrate the feasibility and effectiveness of the proposed methods.
Delay-Dependent Robust Stability Criteria For Singular Time-Delay Systems By Delay-Partitioning Approach In this paper, the problem of delay-dependent robust stability for singular time-delay systems is investigated. The parametric uncertainties are assumed to be norm bounded. Based on the idea of delay partitioning, a new Lyapunov-Krasovskii functional is proposed to develop new delay-dependent criteria that ensure the considered system is regular, impulse free and stable in terms of linear matrix inequalities. Furthermore, the Wirtinger-based integral inequality approach has been employed to derive less conservative results. Finally, some numerical examples are provided to demonstrate the effectiveness of the obtained results and for comparison with previous works.
Affine Bessel-Legendre inequality: Application to stability analysis for systems with time-varying delays. Recently, some novel inequalities have been proposed such as the auxiliary function-based integral inequality and the Bessel–Legendre inequality which can be obtained from the former by choosing Legendre polynomials as auxiliary functions. These inequalities have been successfully applied to systems with constant delays but there have been some difficulties in application to systems with time-varying delays since the resulting bounds contain the reciprocal convexity which may not be tractable as it is. This paper proposes an equivalent form of the Bessel–Legendre inequality, which has the advantage of being easily applied to systems with time-varying delays without the reciprocal convexity.
Stability of Recurrent Neural Networks With Time-Varying Delay via Flexible Terminal Method. This brief is concerned with the stability criteria for recurrent neural networks with time-varying delay. First, based on convex combination technique, a delay interval with fixed terminals is changed into the one with flexible terminals, which is called flexible terminal method (FTM). Second, based on the FTM, a novel Lyapunov-Krasovskii functional is constructed, in which the integral interval ...
Some novel approaches on state estimation of delayed neural networks. This paper studies the issue of state estimation for a class of neural networks (NNs) with time-varying delay. A novel Lyapunov-Krasovskii functional (LKF) is constructed, where triple integral terms are used and a secondary delay-partition approach (SDPA) is employed. Compared with the existing delay-partition approaches, the proposed approach can exploit more information on the time-delay intervals. By taking full advantage of a modified Wirtinger's integral inequality (MWII), improved delay-dependent stability criteria are derived, which guarantee the existence of desired state estimator for delayed neural networks (DNNs). A better estimator gain matrix is obtained in terms of the solution of linear matrix inequalities (LMIs). In addition, a new activation function dividing method is developed by bringing in some adjustable parameters. Three numerical examples with simulations are presented to demonstrate the effectiveness and merits of the proposed methods.
Conditions for stability of the extended Kalman filter and their application to the frequency tracking problem The error dynamics of the extended Kalman filter (EKF), employed as an observer for a general nonlinear, stochastic discrete time system, are analyzed. Sufficient conditions for the boundedness of the errors of the EKF are determined. An expression for the bound on the errors is given in terms of the size of the nonlinearities of the system and the error covariance matrices used in the design of the EKF. The results are applied to the design of a stable EKF frequency tracker for a signal with time-varying frequency.
Stepwise Refinement of Control Software - A Case Study Using RAISE We develop a control program for a realistic automation problem by stepwise refinement. We focus on exemplifying appropriate levels of abstraction for the refinement steps. By using phases as a means for abstraction, safety requirements are specified on a high level of abstraction and can be verified using process algebra. The case study is carried out using the RAISE specification language, and we report on some experiences using the RAISE tool set.
Stored data structures on the Manchester dataflow machine Experience with the Manchester Dataflow Machine has highlighted the importance of efficient handling of stored data structures in a practical parallel machine. It has proved necessary to add a special-purpose structure store to the machine, and this paper describes the role of this structure store and the software which uses it. Some key issues in data structure handling for parallel machines are raised.
On denoising and compression of DNA microarray images The annotation of proteins can be achieved by classifying the protein of interest into a certain known protein family to induce its functional and structural features. This paper presents a new method for classifying protein sequences based upon the ...
Graph library design We present an object oriented design for graph libraries that implements a dynamic typing of graphs. With this design, we can specify pre- and post-conditions on graph algorithms, describe safe polymorphic algorithms on graphs and specify operations specific to types of graphs, while preserving performance and allowing extensibility.
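One way to read "dynamic typing of graphs" is as runtime-checkable graph properties serving as pre- and post-conditions on algorithms. The sketch below is an invented illustration in that spirit, not the paper's actual design.

```python
class Graph:
    """Adjacency-set digraph with a runtime-checkable property that can
    serve as a precondition for algorithms."""

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set())

    def is_dag(self):
        # Precondition for, e.g., topological sort: no directed cycle.
        seen, on_stack = set(), set()
        def visit(u):
            if u in on_stack:
                return False          # back edge found: a cycle
            if u in seen:
                return True
            seen.add(u); on_stack.add(u)
            ok = all(visit(v) for v in self.adj[u])
            on_stack.discard(u)
            return ok
        return all(visit(u) for u in self.adj)

def topological_sort(g):
    assert g.is_dag(), "precondition: graph must be acyclic"
    order, seen = [], set()
    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for v in g.adj[u]:
            visit(v)
        order.append(u)               # postorder; reverse gives the sort
    for u in g.adj:
        visit(u)
    return order[::-1]

g = Graph()
g.add_edge("a", "b"); g.add_edge("b", "c")
print(topological_sort(g))   # ['a', 'b', 'c']
```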
1.004099
0.004862
0.004408
0.004387
0.004136
0.003704
0.002204
0.001362
0.000641
0.000003
0
0
0
0
Local Synchronization Criteria of Markovian Nonlinearly Coupled Neural Networks With Uncertain and Partially Unknown Transition Rates. In this paper, the local synchronization problem of Markovian nonlinearly coupled neural networks with uncertain and partially unknown transition rates is investigated. Each transition rate in this Markovian nonlinearly coupled neural networks model is uncertain or completely unknown because the complete knowledge on the transition rates is difficult and the cost is probably high. By applying the Lyapunov-Krasovskii functional, a new integral inequality combining with free-matrix-based integral inequality and further improved integral inequality, the less conservative local synchronization criteria are obtained. The new delay-dependent local synchronization criteria containing the bounds of delay and delay derivative are given in terms of linear matrix inequalities. Finally, a simulation example is provided to illustrate the effectiveness of the proposed method.
Stability and Stabilization of Takagi-Sugeno Fuzzy Systems via Sampled-Data and State Quantized Controller. In this paper, we investigate the problem of stability and stabilization for sampled-data fuzzy systems with state quantization. By using an input delay approach, the sampled-data fuzzy systems with state quantization are transformed into a continuous-time system with a delay in the state. The transformed system contains nondifferentiable time-varying state delay. Based on some integral techniques...
Nonfragile Exponential Synchronization of Delayed Complex Dynamical Networks With Memory Sampled-Data Control. This paper considers nonfragile exponential synchronization for complex dynamical networks (CDNs) with time-varying coupling delay. The sampled-data feedback control, which is assumed to allow norm-bounded uncertainty and involves a constant signal transmission delay, is constructed for the first time in this paper. By constructing a suitable augmented Lyapunov function, and with the help of intro...
Improved delay-range-dependent stability criteria for linear systems with time-varying delays This paper is concerned with the stability analysis of linear systems with time-varying delays in a given range. A new type of augmented Lyapunov functional is proposed which contains some triple-integral terms. In the proposed Lyapunov functional, the information on the lower bound of the delay is fully exploited. Some new stability criteria are derived in terms of linear matrix inequalities without introducing any free-weighting matrices. Numerical examples are given to illustrate the effectiveness of the proposed method.
Convex Dwell-Time Characterizations for Uncertain Linear Impulsive Systems New sufficient conditions for the characterization of dwell-times for linear impulsive systems are proposed and shown to coincide with continuous decrease conditions of a certain class of looped-functionals, a recently introduced type of functionals suitable for the analysis of hybrid systems. This approach allows one to consider Lyapunov functions that evolve nonmonotonically along the flow of the system in a new way, thus broadening the admissible class of systems which may be analyzed. As a byproduct, the particular structure of the obtained conditions makes the method easily extendable to uncertain systems by exploiting some convexity properties. Several examples illustrate the approach.
An improved delay-partitioning approach to stability criteria for generalized neural networks with interval time-varying delays. This paper deals with the problem of stability analysis for generalized delayed neural networks with interval time-varying delays based on the delay-partitioning approach. By constructing a suitable Lyapunov-Krasovskii functional with triple- and four-integral terms and using Jensen's inequality, the Wirtinger-based single- and double-integral inequality technique and linear matrix inequalities (LMIs), criteria are derived which guarantee the asymptotic stability of the addressed neural networks. These LMIs can be easily solved via convex optimization algorithms. The novelty of this paper is that the consideration of new integral inequalities and a new Lyapunov-Krasovskii functional is shown to be less conservative, and it takes fully into account the relationship between the terms in the Leibniz-Newton formula within the framework of LMIs. Moreover, it is assumed that the lower bound of the time-varying delay is not restricted to be zero. Finally, several interesting numerical examples are given to demonstrate the effectiveness and lesser conservativeness of our theoretical results over well-known examples existing in the recent literature.
Extended dissipative analysis of generalized Markovian switching neural networks with two delay components. The topic of delay-dependent extended dissipative analysis for generalized Markovian switching neural networks (GMSNNs) with two delay components is considered in this paper. Based on the concept of extended dissipativity, this paper solves the H∞, L2−L∞, passive and (Q, S, R)-dissipativity performance problems in a unified framework. By means of an augmented Lyapunov-Krasovskii functional (LKF) as well as employing the novel free-matrix-based inequality and the reciprocally convex approach, some improved delay-dependent criteria are established in terms of linear matrix inequalities (LMIs). Moreover, the obtained criteria are extended to the extended dissipative analysis of generalized neural networks (GNNs) with two delay components. Numerical examples are shown to illustrate the effectiveness of the methods.
New stability criteria for linear systems with interval time-varying delay This paper investigates robust stability of uncertain linear systems with interval time-varying delay. The time-varying delay is assumed to belong to an interval and is a fast time-varying function. The uncertainty under consideration includes polytopic-type uncertainty and linear fractional norm-bounded uncertainty. A new Lyapunov–Krasovskii functional, which makes use of the information of both the lower and upper bounds of the interval time-varying delay, is proposed to derive some new delay-dependent stability criteria. In order to obtain much less conservative results, a tighter bound on some terms is estimated. Moreover, no redundant matrix variable is introduced. Finally, three numerical examples are given to show the effectiveness of the proposed stability criteria.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
A singleton failures semantics for Communicating Sequential Processes This paper defines a new denotational semantics for the language of Communicating Sequential Processes (CSP). The semantics lies between the existing traces and failures models of CSP, providing a treatment of non-determinism in terms of singleton failures. Although the semantics does not represent a congruence upon the full language, it is adequate for sequential tests of non-deterministic processes. This semantics corresponds exactly to a commonly used notion of data refinement in Z and Object-Z: an abstract data type is refined when the corresponding process is refined in terms of singleton failures. The semantics is used to explore the relationship between data refinement and process refinement, and to derive a rule for data refinement that is both sound and complete.
A taxonomy for real-world modelling concepts A major component in problem analysis is to model the real world itself. However, the modelling languages suggested so far suffer from several weaknesses, especially with respect to dynamics. First, dynamic modelling languages originally aimed at describing data, rather than real-world, processes. Moreover, they are either weak in expression, so that models become too vague to be meaningful, or they are cluttered with rigorous detail, which makes modelling unnecessarily complicated and inhibits communication with end users. This paper establishes a simple and intuitive conceptual basis for the modelling of the real world, with an emphasis on dynamics. Object-orientation is not considered appropriate for this purpose, due to its focus on static object structure. Dataflow diagrams, on the other hand, emphasize dynamics, but unfortunately, some major conceptual deficiencies make DFDs, as well as their various formal extensions, unsuited for real-world modelling. This paper presents a taxonomy of concepts for real-world modelling which relies on some seemingly small, but essential, modifications of the DFD language. Hence the well-known, communication-oriented diagrammatic representations of DFDs can be retained. It is indicated how the approach can support a smooth transition into later stages of object-oriented design and implementation.
Active Knowledge Systems for the Pragmatic Web As the limitations of the Semantic Web become apparent, the next step, creating the Pragmatic Web, requires active knowledge systems that have the capability to support practical and complex human interaction and communication. A key ingredient in this effort is a system's ability to respond to events in the real world. The Pragmatic Web would therefore not be merely a knowledge interchange medium; it would actively support humans using that knowledge to accomplish tasks. The main goal of this paper is to show how an active knowledge system can support formal models of human pragmatic communication, combining earlier work on active knowledge systems, formal models of communication acts and formal models of organizational actors. We carry through an extended example illustrating some of these ideas.
Addressing degraded service outcomes and exceptional modes of operation in behavioural models A dependable software system should attempt to at least partially satisfy user goals if full service provision is impossible due to an exceptional situation. In addition, a dependable system should evaluate the effects of the exceptional situation on future service provision and adjust the set of services it promises to deliver accordingly. In this paper we show how to express degraded service outcomes and exceptional modes of operation in behavioural models, i.e. use cases, activity diagrams and state charts. We also outline how to integrate the task of discovering and defining degraded outcomes and exceptional modes of operation into a requirements engineering process by presenting the relevant parts of our dependability-focused requirements engineering process DREP.
Dissipativity analysis of neural networks with time-varying delays This paper focuses on the problem of delay-dependent dissipativity analysis for a class of neural networks with time-varying delays. A free-matrix-based inequality method is developed by introducing a set of slack variables, which can be optimized via existing convex optimization algorithms. Then, by employing the Lyapunov functional approach, sufficient conditions are derived to guarantee that the considered neural networks are strictly (Q,S,R)-γ-dissipative. The conditions are presented in terms of linear matrix inequalities and can be readily checked and solved. Numerical examples are finally provided to demonstrate the effectiveness and advantages of the proposed new design techniques.
1.221
0.031571
0.024556
0.00714
0.001186
0.00035
0.000167
0.000042
0
0
0
0
0
0
InterPreTS: protein interaction prediction through tertiary structure. InterPreTS (Interaction Prediction through Tertiary Structure) is a web-based version of our method for predicting protein-protein interactions (Aloy and Russell, 2002, Proc. Natl Acad. Sci. USA, 99, 5896-5901). Given a pair of query sequences, we first search for homologues in a database of interacting domains (DBID) of known three-dimensional complex structures. Pairs of sequences homologous to a known interacting pair are scored for how well they preserve the atomic contacts at the interaction interface. InterPreTS includes a useful interface for visualising molecular details of any predicted interaction.
UniProt Knowledgebase: a hub of integrated protein data. The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases.
A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Motivation: When running experiments that involve multiple high density oligonucleotide arrays, it is important to remove sources of variation between arrays of non-biological origin. Normalization is a process for reducing this variation. It is common to see non-linear relations between arrays and the standard normalization provided by Affymetrix does not perform well in these situations. Results: We present three methods of performing normalization at the probe intensity level. These methods are called complete data methods because they make use of data from all arrays in an experiment to form the normalizing relation. These algorithms are compared to two methods that make use of a baseline array: a one number scaling based algorithm and a method that uses a non-linear normalizing relation, by comparing the variability and bias of an expression measure. Two publicly available datasets are used to carry out the comparisons. The simplest and quickest complete data method is found to perform favorably. Availability: Software implementing all three of the complete data normalization methods is available as part of the R package Affy, which is part of the Bioconductor project http://www.bioconductor.org. Contact: [email protected] Supplementary information: Additional figures may be found at http://www.stat.berkeley.edu/~bolstad/normalize/index.html
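One of the complete data methods compared above is quantile normalization, which forces every array to share a common empirical distribution. A minimal numpy sketch of that idea follows (an illustration only, not the authors' R/Bioconductor implementation in the affy package; ties between probes are broken arbitrarily):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a (probes x arrays) intensity matrix.

    Every array (column) is forced to share the same empirical
    distribution: the mean of the sorted columns.
    """
    ranks = np.argsort(X, axis=0)            # per-array ordering of probes
    sorted_X = np.sort(X, axis=0)            # intensities sorted within each array
    mean_quantiles = sorted_X.mean(axis=1)   # common reference distribution
    X_norm = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        # probe with rank r in array j receives the r-th reference quantile
        X_norm[ranks[:, j], j] = mean_quantiles
    return X_norm

# toy example: three arrays with different scales
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
print(quantile_normalize(X))
```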
A comparison of background correction methods for two-colour microarrays Motivation: Microarray data must be background corrected to remove the effects of non-specific binding or spatial heterogeneity across the array, but this practice typically causes other problems such as negative corrected intensities and high variability of low intensity log-ratios. Different estimators of background, and various model-based processing methods, are compared in this study in search of the best option for differential expression analyses of small microarray experiments. Results: Using data where some independent truth in gene expression is known, eight different background correction alternatives are compared, in terms of precision and bias of the resulting gene expression measures, and in terms of their ability to detect differentially expressed genes as judged by two popular algorithms, SAM and limma eBayes. A new background processing method (normexp) is introduced which is based on a convolution model. The model-based correction methods are shown to be markedly superior to the usual practice of subtracting local background estimates. Methods which stabilize the variances of the log-ratios along the intensity range perform the best. The normexp+offset method is found to give the lowest false discovery rate overall, followed by morph and vsn. Like vsn, normexp is applicable to most types of two-colour microarray data. Availability: The background correction methods compared in this article are available in the R package limma (Smyth, 2005) from http://www.bioconductor.org. Contact: [email protected] Supplementary information: Supplementary data are available from http://bioinf.wehi.edu.au/resources/webReferences.html.
SAPIN: a framework for the structural analysis of protein interaction networks. Protein interaction networks are widely used to depict the relationships between proteins. These networks often lack the information on physical binary interactions, and they do not inform whether there is incompatibility of structure between binding partners. Here, we introduce SAPIN, a framework dedicated to the structural analysis of protein interaction networks. SAPIN first identifies the protein parts that could be involved in the interaction and provides template structures. Next, SAPIN performs structural superimpositions to identify compatible and mutually exclusive interactions. Finally, the results are displayed using Cytoscape Web.
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they determine whether such a controller can prevent the explosion of the number of tokens.
Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automaton which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
Fuzzy identification of systems and its application to modeling and control
Database design with common sense business reasoning and learning Automated database design systems embody knowledge about the database design process. However, their lack of knowledge about the domains for which databases are being developed significantly limits their usefulness. A methodology for acquiring and using general world knowledge about business for database design has been developed and implemented in a system called the Common Sense Business Reasoner, which acquires facts about application domains and organizes them into a hierarchical, context-dependent knowledge base. This knowledge is used to make intelligent suggestions to a user about the entities, attributes, and relationships to include in a database design. A distance function approach is employed for integrating specific facts, obtained from individual design sessions, into the knowledge base (learning) and for applying the knowledge to subsequent design problems (reasoning).
An Operational Approach to Requirements Specification for Embedded Systems The approach to requirements specification for embedded systems described in this paper is called "operational" because a requirements specification is an executable model of the proposed system interacting with its environment. The approach is embodied by the language PAISLey, which is motivated and defined herein. Embedded systems are characterized by asynchronous parallelism, even at the requirements level; PAISLey specifications are constructed by interacting processes so that this can be represented directly. Embedded systems are also characterized by urgent performance requirements, and PAISLey offers a formal, but intuitive, treatment of performance.
Refinement calculus, part I: sequential nondeterministic programs A lattice theoretic framework for the calculus of program refinement is presented. Specifications and program statements are combined into a single (infinitary) language of commands which permits miraculous, angelic and demonic statements to be used in the description of program behavior. The weakest precondition calculus is extended to cover this larger class of statements and a game-theoretic interpretation is given for these constructs. The language is complete, in the sense that every monotonic predicate transformer can be expressed in it. The usual program constructs can be defined as derived notions in this language. The notion of inverse statements is defined and its use in formalizing the notion of data refinement is shown.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.084444
0.066667
0.066667
0.066667
0.066667
0
0
0
0
0
0
0
0
0
On the performance of UML state machine interpretation at runtime Modelling system behaviour by means of UML Behavioral State Machines is an established practice in software engineering. Usually, code generation is employed to create a system's software components. Although this approach yields software with a good runtime performance, the resulting system behaviour is static. Changes to the behaviour model necessarily provoke an iteration in the code generation workflow and a re-deployment of the generated artefacts. In the area of autonomic systems engineering, it is assumed that systems are able to adapt their runtime behaviour in response to a changing context. Thus, the constraints imposed by a code generation approach make runtime adaptation difficult, if not impossible. This article investigates a solution to this problem by employing interpretation techniques for the runtime execution of UML State Machines, enabling the adaptability of a system's runtime behaviour on the level of single model elements. This is done by devising concepts for behaviour model interpretation, which are then used in a proof-of-concept implementation to demonstrate the feasibility of the approach. For a quantitative evaluation we provide a performance comparison between the proof-of-concept implementation and generated code for a number of benchmark models. We find that UML State Machine interpretation has a performance overhead when compared with static code generation, but found it to be adequate for the majority of situations, except when dealing with high-throughput or delay-sensitive data.
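The core of the interpretation approach described above is that behaviour stays data at runtime, so a model change needs no code generation or re-deployment. A minimal Python sketch of that idea for a flat state machine follows (illustrative names and a deliberately tiny feature set; the article targets full UML State Machine semantics, which are far richer):

```python
class StateMachineInterpreter:
    """Interprets a flat state machine held as plain data, so the
    transition table can be replaced at runtime without regeneration."""

    def __init__(self, initial, transitions):
        # transitions: {(state, event): (next_state, action or None)}
        self.state = initial
        self.transitions = transitions

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            return  # event not enabled in the current state: ignore it
        next_state, action = self.transitions[key]
        if action:
            action()
        self.state = next_state

    def replace_model(self, initial, transitions):
        # runtime adaptation: swap the behaviour model wholesale
        self.state = initial
        self.transitions = transitions

sm = StateMachineInterpreter(
    "Idle",
    {("Idle", "start"): ("Running", lambda: print("started")),
     ("Running", "stop"): ("Idle", lambda: print("stopped"))})
sm.dispatch("start")   # -> started
sm.dispatch("stop")    # -> stopped
```

Generated code would compile the table into branches and method calls, which is faster but fixes the behaviour; the interpreter trades some throughput for the ability to call replace_model while the system runs.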
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
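As a toy illustration of the first-level TS mechanism described above, the sketch below runs flip moves with a recency-based tabu list and an aspiration rule that overrides tabu status when a move beats the incumbent, on a made-up multiconstraint knapsack instance. The advanced-level strategies, learning and Target Analysis from the abstract are omitted:

```python
import random

def tabu_knapsack(values, weights, capacities, iters=500, tenure=7, seed=0):
    """First-level tabu search for max v.x s.t. W x <= c, x in {0,1}^n."""
    rng = random.Random(seed)
    n = len(values)

    def feasible(x):
        return all(sum(w[i] * x[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(x):
        return sum(values[i] * x[i] for i in range(n))

    x = [0] * n
    best, best_val = x[:], 0
    tabu_until = [0] * n                   # iteration until which a flip is tabu
    for t in range(1, iters + 1):
        move, move_val = None, None
        for i in rng.sample(range(n), n):  # scan the flip neighbourhood
            y = x[:]
            y[i] = 1 - y[i]
            if not feasible(y):
                continue
            v = value(y)
            aspiration = v > best_val      # aspiration: beat the incumbent
            if t < tabu_until[i] and not aspiration:
                continue
            if move_val is None or v > move_val:
                move, move_val = i, v
        if move is None:
            break                          # every feasible move is tabu
        x[move] = 1 - x[move]
        tabu_until[move] = t + tenure      # forbid reversing this flip for a while
        if move_val > best_val:
            best, best_val = x[:], move_val
    return best, best_val

# made-up instance with two knapsack constraints
values = [10, 13, 7, 8, 4]
weights = [[5, 6, 3, 4, 2],   # constraint 1
           [4, 5, 4, 3, 1]]   # constraint 2
capacities = [10, 9]
print(tabu_knapsack(values, weights, capacities))
```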
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An integrated software development environment with XML internal representation The processes of software engineering and the industries themselves tend to fluctuate. There are numerous factors, such as software methodologies, technologies, supporting tools, and process management, which may significantly affect the strategies and activities of a software development process. We improve the PIE approach and propose an XML-based meta-model for a process- and agent-based integrated software development environment (PRAISE). PRAISE includes both an external representation in UML and an internal representation in XML, and can be used to support the integration of software development in a global aspect.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Model-Driven Development in Robotics Domain: A Systematic Literature Review Robots are complex agents composed of various sensors and actuators that work together with software to meet specific requirements. The subset of robots that has the ability to interact among themselves and even with people, through gestures or speech, is known as Social Robots. Model-Driven Development is a promising paradigm because it promotes the reuse of components and quick code generation with quality. Model-Driven Development has been widely used in the context of Robotics in order to reduce complexity, reduce development effort and promote the reuse of software. Due to these facts, it becomes pertinent to develop a systematic literature review to compile these results. In this paper we investigate how MDD techniques have helped the field of Robotics; therefore a systematic literature review was conducted seeking to identify approaches and their main technical features, as well as the types of specific requirements, behavioral and social issues. We came to the conclusion that the existing approaches provide many interesting capabilities, typically by using the component-based development paradigm, seeking a higher level of software reuse and facilitating the implementation of systems.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Stability of Linear Systems With Time-Varying Delays Using Bessel–Legendre Inequalities This paper addresses the stability problem of linear systems with a time-varying delay. Hierarchical stability conditions based on linear matrix inequalities are obtained from an extensive use of the Bessel inequality applied to Legendre polynomials of arbitrary orders. While this inequality has been only used for constant discrete and distributed delays, this paper generalizes the same methodology to time-varying delays. We take advantage of the dependence of the stability criteria on both the delay and its derivative to propose a new definition of allowable delay sets. A light and smart modification in their definition leads to relevant conclusions on the numerical results.
Further results on passivity analysis for uncertain neural networks with discrete and distributed delays. The problem of passivity analysis of uncertain neural networks (UNNs) with discrete and distributed delays is considered. By constructing a suitable augmented Lyapunov-Krasovskii functional (LKF) and combining a novel integral inequality with a convex approach to estimate the derivative of the proposed LKF, improved sufficient conditions to guarantee passivity of the concerned neural networks are established within the framework of linear matrix inequalities (LMIs), which can be solved easily by various efficient convex optimization algorithms. Two numerical examples are provided to demonstrate the enlargement of the feasible region of the proposed criteria by comparison of maximum allowable delay bounds.
New absolute stability criteria for uncertain Lur'e systems with time-varying delays. This paper deals with absolute stability of uncertain Lur'e systems with time-varying delay. By introducing a Lyapunov–Krasovskii functional related to a second-order Bessel–Legendre inequality, some absolute stability criteria are derived for the system under study. Different from some existing approaches, a remarkable feature of this paper is that the time-derivative of the Lyapunov–Krasovskii functional is estimated by a linear function rather than a quadratic function on the time-varying delay, thanks to the introduction of four extra vectors. As a result, the resulting absolute stability criteria are less conservative than some existing ones, which is demonstrated through three examples.
Affine Bessel-Legendre inequality: Application to stability analysis for systems with time-varying delays. Recently, some novel inequalities have been proposed such as the auxiliary function-based integral inequality and the Bessel–Legendre inequality which can be obtained from the former by choosing Legendre polynomials as auxiliary functions. These inequalities have been successfully applied to systems with constant delays but there have been some difficulties in application to systems with time-varying delays since the resulting bounds contain the reciprocal convexity which may not be tractable as it is. This paper proposes an equivalent form of the Bessel–Legendre inequality, which has the advantage of being easily applied to systems with time-varying delays without the reciprocal convexity.
Stochastic stability for distributed delay neural networks via augmented Lyapunov-Krasovskii functionals. This paper is concerned with the analysis problem for the globally asymptotic stability of a class of stochastic neural networks with finite or infinite distributed delays. By using the delay decomposition idea, a novel augmented Lyapunov–Krasovskii functional containing double and triple integral terms is constructed, based on which and in combination with the Jensen integral inequalities, a less conservative stability condition is established for stochastic neural networks with infinite distributed delay by means of linear matrix inequalities. As for stochastic neural networks with finite distributed delay, the Wirtinger-based integral inequality is further introduced, together with the augmented Lyapunov–Krasovskii functional, to obtain a more effective stability condition. Finally, several numerical examples demonstrate that our proposed conditions improve typical existing ones.
Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. This paper focuses on the problem of strictly (Q,S,R)-γ-dissipativity analysis for neural networks with two-delay components. Based on the dynamic delay interval method, a Lyapunov–Krasovskii functional is constructed. By solving its self-positive definite and derivative negative definite conditions via an extended reciprocally convex matrix inequality, several new sufficient conditions that guarantee the neural networks strictly (Q,S,R)-γ-dissipative are derived. Furthermore, the dissipativity analysis of neural networks with two-delay components is extended to the stability analysis. Finally, two numerical examples are employed to illustrate the advantages of the proposed method.
Single/Multiple Integral Inequalities With Applications to Stability Analysis of Time-Delay Systems. This technical note is concerned with the problem of stability analysis for time-delay systems. A new series of integral inequalities to bound a single integral term is presented by introducing some free matrices, which produces tighter bounds than some existing ones. Similarly, based on orthogonal polynomials defined in integral inner spaces, new series of multiple integral inequalities are presented as well, which include the existing double ones. To show the effectiveness of the proposed inequalities, their applications to stability analysis of systems with discrete and distributed delays are provided with numerical examples.
Stability analysis of time-delay systems via free-matrix-based double integral inequality. Based on the free-weighting matrix and integral-inequality methods, a free-matrix-based double integral inequality is proposed in this paper, which includes the Wirtinger-based double integral inequality as a special case. By introducing some free matrices into the inequality, more freedom can be provided in bounding the quadratic double integral. The connection of the new integral inequality and Wirtinger-based double one is well described, which gives a sufficient condition for the application of the new inequality to be less conservative. Furthermore, to investigate the effectiveness of the proposed inequality, a new delay-dependent stability criterion is derived in terms of linear matrix inequalities. Numerical examples are given to demonstrate the advantages of the proposed method.
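Criteria of the kind derived in these papers are ultimately checked as LMI feasibility problems with a semidefinite solver. The sketch below illustrates that workflow in cvxpy on the classical delay-independent Lyapunov-Krasovskii test for x'(t) = A x(t) + Ad x(t - h), which is far simpler than the free-matrix-based conditions above; the system matrices are made-up toy data, and the slack variable S merely routes the block matrix through a structurally symmetric variable:

```python
import cvxpy as cp
import numpy as np

# Toy system x'(t) = A x(t) + Ad x(t - h); the matrices are made-up data.
A  = np.array([[-2.0, 0.0], [0.0, -2.0]])
Ad = np.array([[0.5, 0.1], [0.0, 0.5]])
n  = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# Delay-independent criterion: find P > 0, Q > 0 such that
#   [[A'P + P A + Q,  P Ad],
#    [Ad' P,          -Q  ]]  <  0.
M = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
             [Ad.T @ P,            -Q]])
# Equate M to a declared-symmetric variable so the semidefinite
# constraint below is accepted (M is symmetric by construction).
S = cp.Variable((2 * n, 2 * n), symmetric=True)
constraints = [S == M,
               P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               S << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible, hence stable for any constant delay:",
      prob.status == cp.OPTIMAL)
```

The more refined inequalities above (Wirtinger-based, free-matrix-based, Bessel–Legendre) change the blocks of M and add decision variables, but the feasibility-checking workflow is the same.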
Robust Stabilization for Uncertain Saturated Time-Delay Systems: A Distributed-Delay-Dependent Polytopic Approach. This technical note investigates the robust stabilization problem for uncertain linear systems with discrete and distributed delays under saturated state feedback. Different from the existing approaches, a distributed-delay-dependent polytopic approach is proposed in this technical note, and the saturation nonlinearity is represented as the convex combination of state feedback and auxiliary distributed-delay feedback. Then, by incorporating an appropriate augmented Lyapunov-Krasovskii (L-K) functional and some integral inequalities, the less conservative stabilization and robust stabilization conditions are proposed in terms of linear matrix inequalities (LMIs). The effectiveness and reduced conservatism of the proposed conditions are illustrated by numerical examples.
Distributed Linear Estimation Over Sensor Networks We consider a network of sensors in which each node may collect noisy linear measurements of some unknown parameter. In this context, we study a distributed consensus diffusion scheme that relies only on bidirectional communication among neighbour nodes (nodes that can communicate and exchange data), and allows every node to compute an estimate of the unknown parameter that asymptotically converges to the true parameter. At each time iteration, a measurement update and a spatial diffusion phase are performed across the network, and a local least-squares estimate is computed at each node. The proposed scheme allows one to consider networks with dynamically changing communication topology, and it is robust to unreliable communication links and failures in measuring nodes. We show that under suitable hypotheses all the local estimates converge to the true parameter value.
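A minimal sketch of the measurement-update-plus-diffusion idea follows, assuming a fixed doubly stochastic weight matrix A over the communication graph; the function name and the simplified two-phase recursion are illustrative, not the paper's exact scheme.

```python
import numpy as np

def diffusion_least_squares(H, y, A, n_iter=200, reg=1e-6):
    """Each node i holds noisy linear measurements y[i] = H[i] @ theta + noise.
    A is a doubly stochastic weight matrix; A[i, j] = 0 for non-neighbours."""
    n, p = len(H), H[0].shape[1]
    # measurement phase: local information matrix/vector at each node
    P = [H[i].T @ H[i] + reg * np.eye(p) for i in range(n)]
    q = [H[i].T @ y[i] for i in range(n)]
    for _ in range(n_iter):
        # spatial diffusion phase: convex combination over neighbours only
        P = [sum(A[i, j] * P[j] for j in range(n)) for i in range(n)]
        q = [sum(A[i, j] * q[j] for j in range(n)) for i in range(n)]
    # each node solves its own local least-squares problem
    return [np.linalg.solve(P[i], q[i]) for i in range(n)]
```

Because A is zero off the neighbourhood structure, each diffusion step uses only bidirectional exchanges with neighbours, matching the communication pattern described in the abstract.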
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Plan Abstraction Based on Operator Generalization We describe a planning system which automatically creates abstract operators while organizing a given set of primitive operators into a taxonomic hierarchy. At the same time, the system creates categories of abstract object types which allow abstract operators to apply to broad classes of functionally similar objects. After the system has found a plan to achieve a particular goal, it replaces each primitive operator in the plan with one of its ancestors from the operator taxonomy. The resulting abstract plan is incorporated into the operator hierarchy as a new abstract operator, an abstract-macro. The next time the planner is faced with a similar task, it can specialize the abstract-macro into a suitable plan by again using the operator taxonomy, this time replacing the abstract operators with appropriate descendants.
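A toy sketch of the abstraction step described above, assuming the operator taxonomy is given as a child-to-parent mapping; the names (`parent_of`, `abstract_plan`) and the example operators are hypothetical.

```python
# Hypothetical operator taxonomy: each primitive or abstract operator
# maps to its parent in the hierarchy (None at the root).
parent_of = {
    "pickup-cup": "pickup-container",
    "pickup-bowl": "pickup-container",
    "pickup-container": "acquire-object",
    "acquire-object": None,
}

def abstract_plan(plan, levels=1):
    """Replace each operator in the plan by an ancestor `levels` steps up."""
    abstracted = []
    for op in plan:
        for _ in range(levels):
            if parent_of.get(op) is not None:
                op = parent_of[op]
        abstracted.append(op)
    return abstracted

print(abstract_plan(["pickup-cup", "pickup-bowl"]))
# ['pickup-container', 'pickup-container']
```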
A comparison of multiprocessor task scheduling algorithms with communication costs Both parallel and distributed network environment systems play a vital role in the improvement of high performance computing. Of primary concern when analyzing these systems is multiprocessor task scheduling. Therefore, this paper addresses the challenge of scheduling the tasks of parallel programs, represented as directed acyclic task graphs (DAGs), for execution on multiprocessors with communication costs. Moreover, we investigate an alternative paradigm based on genetic algorithms (GAs), which have recently received much attention as a class of robust stochastic search algorithms for various combinatorial optimization problems. We design a new encoding mechanism with a multi-functional chromosome that uses a priority representation, the so-called priority-based multi-chromosome (PMC). PMC can efficiently represent a task schedule and assign tasks to processors. The proposed priority-based GA has shown effective performance for scheduling in various parallel environments.
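A simplified sketch of how a priority chromosome can be decoded into a schedule by list scheduling over a DAG; communication costs and the multi-chromosome structure of PMC are omitted, and all names are illustrative.

```python
def decode_schedule(priority, succ, pred_count, duration, n_procs):
    """priority: task -> priority gene (the GA chromosome); succ: task ->
    successor list; pred_count: task -> unmet predecessors (mutated here)."""
    ready = [t for t, c in pred_count.items() if c == 0]
    proc_free = [0.0] * n_procs               # earliest free time per processor
    earliest = {t: 0.0 for t in pred_count}   # data-ready time per task
    finish, schedule = {}, []
    while ready:
        # pick the ready task with the highest priority gene
        task = max(ready, key=lambda t: priority[t])
        ready.remove(task)
        # place it on the processor that lets it start soonest
        p = min(range(n_procs), key=lambda i: max(proc_free[i], earliest[task]))
        start = max(proc_free[p], earliest[task])
        finish[task] = start + duration[task]
        proc_free[p] = finish[task]
        schedule.append((task, p, start))
        for s in succ.get(task, []):
            earliest[s] = max(earliest[s], finish[task])
            pred_count[s] -= 1
            if pred_count[s] == 0:
                ready.append(s)
    return schedule
```

The GA then searches over the priority genes, with each chromosome evaluated by the makespan of its decoded schedule.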
Trend analysis for flaming of SNS SNS is widely used as a convenient communication tool. In addition, the measurement of campaign effects, social prediction, and trend analysis using SNS have also been proposed in several studies. However, due to inappropriate posts on SNS, a phenomenon in which an avalanche of criticism descends on the poster occurs frequently. This phenomenon is called “flaming”. Since flaming demands excessive social accountability not only from the poster but also from organizations related to the poster, it is becoming an important social problem in Japan. In this paper, we analyze flaming trends based on past cases, with the aim of protecting against flaming.
Scores: 1.010712, 0.012667, 0.012, 0.0108, 0.01, 0.007556, 0.004761, 0.001831, 0.000335, 0.000001, 0, 0, 0, 0
Multiple-vector user profiles in support of knowledge sharing This paper describes an algorithm to automatically construct expertise profiles for company employees, based on documents authored and read by them. A profile consists of a series of high dimensional vectors, each describing an expertise domain, and provides a hierarchy between these vectors, enabling a structured view on an employee's expertise. The algorithm is novel in providing this layered view, as well as in its high degree of automation and its generic approach ensuring applicability in an industrial setting. The profiles provide support for several knowledge management functionalities that are difficult or impossible to achieve using existing methods. This paper in particular presents the initialization of communities of practice, bringing together both experts and novices on a specific topic. An algorithm to automatically discover relationships between employees based on their profiles is described. These relationships can be used to initiate communities of practice. The algorithms are validated by means of a realistic dataset.
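A minimal sketch of the underlying idea of representing expertise as high-dimensional document vectors and matching employees by vector similarity; it uses standard scikit-learn primitives and flattens the paper's hierarchy of per-domain vectors into a single vector per employee, with hypothetical employee data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# documents authored/read by each employee, concatenated per employee
employees = {
    "alice": "wavelet image coding subband transforms compression",
    "bob": "CSP process algebra refinement B-method verification",
    "carol": "lossy compression DNA microarray image coding",
}

names = list(employees)
vectorizer = TfidfVectorizer()
profiles = vectorizer.fit_transform(employees[n] for n in names)

# pairwise similarity between expertise profiles; high values suggest
# candidates for the same community of practice
sim = cosine_similarity(profiles)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} ~ {names[j]}: {sim[i, j]:.2f}")
```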
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
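A compact sketch of the flavor of tabu search on a multiconstraint knapsack instance, assuming single-variable flip moves, a fixed-tenure tabu list, and an aspiration rule that overrides tabu status when a move improves on the incumbent; the paper's actual choice rules, advanced strategies, and learning are more sophisticated.

```python
import random

def tabu_knapsack(values, weights, caps, iters=2000, tenure=7):
    """values: profit per item; weights[k][i]: weight of item i in constraint k;
    caps: capacity per constraint. Returns the best feasible 0/1 vector found."""
    n = len(values)
    x = [0] * n  # all-zero start is feasible for nonnegative weights

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, caps))

    def profit(sol):
        return sum(v * s for v, s in zip(values, sol))

    best, best_val = x[:], profit(x)
    tabu_until = [0] * n
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1                     # flip move on variable i
            if not feasible(y):
                continue
            val = profit(y)
            # aspiration: a tabu move is allowed only if it beats the incumbent
            if tabu_until[i] > it and val <= best_val:
                continue
            candidates.append((val, i, y))
        if not candidates:
            break
        val, i, x = max(candidates, key=lambda c: (c[0], random.random()))
        tabu_until[i] = it + tenure       # forbid re-flipping i for a while
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val
```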
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A comparative study of five parallel programming languages. Many different paradigms for parallel programming exist, nearly each of which is employed in dozens of languages. Several researchers have tried to compare these languages and paradigms by examining the expressivity and flexibility of their constructs. Few attempts have been made, however, at practical studies based on actual programming experience with multiple languages. Such a study is the topic of this paper. We will look at five parallel languages, all based on different paradigms. The languages are: SR (based on message passing), Emerald (concurrent objects), Parlog (parallel Horn clause logic), Linda (Tuple Space), and Orca (logically shared data). We have implemented the same parallel programs in each language, using real parallel machines. The paper reports on our experiences in implementing three frequently occurring communication patterns: message passing through a mailbox, one-to-many communication, and access to replicated shared data.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Dual-Clustering-Based Hyperspectral Band Selection by Contextual Analysis. Hyperspectral images (HSIs) involve vast quantities of information that can help with image analysis. However, this information is often redundant for specific applications such as HSI classification and anomaly detection. To address this problem, hyperspectral band selection is viewed as an effective dimensionality reduction method that can remove the redundant components of an HSI. Various HSI band selection methods have been proposed recently, the clustering-based method being a traditional one. This agglomerative method is simple and straightforward, but its performance is generally inferior to the state of the art. To tackle the inherent drawbacks of the clustering-based band selection method, a new framework centered on dual clustering is proposed in this paper. The main contributions can be summarized as follows: 1) a novel descriptor that reveals the context of an HSI efficiently; 2) a dual clustering method that includes the contextual information in the clustering process; 3) a new strategy that selects the cluster representatives by jointly considering the mutual effects of each cluster. Experimental results on three real-world HSIs verify the noticeable accuracy of the proposed method with regard to the HSI classification application. The main comparison is conducted among several recent clustering-based and constraint-based band selection methods, demonstrating the superiority of the presented technique.
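A baseline sketch of the classical clustering-based band selection that the paper improves on: cluster the spectral bands (each treated as a vector of pixel values) and keep the band nearest to each cluster centre. The contextual descriptor and dual-clustering strategy of the paper are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_bands(cube, n_bands):
    """cube: HSI array of shape (rows, cols, bands). Returns band indices."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).T        # one feature vector per band
    km = KMeans(n_clusters=n_bands, n_init=10, random_state=0).fit(X)
    selected = []
    for k in range(n_bands):
        members = np.where(km.labels_ == k)[0]
        # representative band: the member closest to the cluster centre
        d = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
        selected.append(int(members[np.argmin(d)]))
    return sorted(selected)
```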
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer possibilities to make own decisions to guide the integration process and to achieve qualitative design goals.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
An Overview on Wavelets in Source Coding, Communications, and Networks. The use of wavelets in the broad areas of source coding, communications, and networks is surveyed. Specifically, the impact of wavelets and wavelet theory in image coding, video coding, image interpolation, image-adaptive lifting transforms, multiple-description coding, and joint source-channel coding is overviewed. Recent contributions in these areas arising in subsequent papers of the present special issue are described.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
An Efficient Polyphase Filter Based Resampling Method for Unifying the PRFs in SAR Data As current airborne and spaceborne synthetic aperture radar (SAR) systems aim to produce higher resolution and wider area products, their associated complexities call for handling stricter requirements. Variable and higher pulse repetition frequencies (PRFs) are increasingly being used to achieve these demanding requirements in modern radar systems. This paper presents a resampling scheme capable of unifying and downsampling variable PRFs within a single look complex (SLC) SAR acquisition and across a repeat pass sequence of acquisitions down to an effective lower PRF through the use of polyphase filters. To evaluate the performance of this resampling scheme, we use airborne SAR raw data with variable PRFs. The data were processed with and without the proposed resampling method as part of the flow of the imaging algorithm. Significant improvement in the point spread function (PSF) measurement and the visible image quality after rate conversion and normalization justify the theoretical basis of the proposed method and the benefits it can provide in application scenarios.
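The rational-rate part of such a scheme can be expressed with a standard polyphase resampler; the sketch below uses scipy's resample_poly to bring one constant-PRF block of azimuth samples from a source PRF to a common target PRF. Per-block handling of *variable* PRFs and the paper's filter design are omitted, and the function name and PRF values are illustrative.

```python
from fractions import Fraction
import numpy as np
from scipy.signal import resample_poly

def unify_prf(signal, src_prf, target_prf, max_denominator=1000):
    """Resample one constant-PRF block of azimuth samples to target_prf."""
    ratio = Fraction(target_prf / src_prf).limit_denominator(max_denominator)
    up, down = ratio.numerator, ratio.denominator
    # polyphase filtering: upsample by `up`, anti-alias, downsample by `down`
    return resample_poly(signal, up, down)

# e.g. a complex SLC block acquired at 1850 Hz mapped onto a common 1500 Hz grid
x = np.random.randn(4096) + 1j * np.random.randn(4096)
y = unify_prf(x, src_prf=1850.0, target_prf=1500.0)
```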
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Strategic Approach to Transformational Design Designing parallel systems in a correct way is difficult. Transformational design of systems guarantees correctness by the correctness of the transformations, but is often tedious and complicated. We discuss different transformation strategies to guide the designer from the initial specification to different implementations, tailored to different architectures. Strategies give rise to simpler transformation rules, point the way in the design trajectory, and allow for the reuse of proofs and transformation steps when deriving optimizations and variants of algorithms.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Concert: design of a multiprocessor development system Concert is a shared-memory multiprocessor testbed intended to facilitate experimentation with parallel programs and programming languages. It consists of up to eight clusters, with 4-8 processors in each cluster. The processors in each cluster communicate using a shared bus, but each processor also has a private path to some memory. The novel feature of Concert is the RingBus, a segmented bus in the shape of a ring that permits communication between clusters at relatively low cost. Efficient arbitration among requests to use the RingBus is a major challenge, which is met by a novel hardware organization, the criss-cross arbiter. Simulation of the Concert RingBus and arbiter show their performance to lie between that of a crossbar switch and a simple shared intercluster bus.
An architecture for mostly functional languages
A case study of parallel execution of a rule-based expert system We report on a case study of the potentials for parallel execution of the inference engine of EMYCIN, a rule-based expert system. Multilisp, which supports parallel execution of tasks by means of the future construct, is used to implement the parallel version of the backwards-chaining inference engine. The study uses explicit specification of parallel execution and synchronization to attain parallel execution. It suggests some general techniques for obtaining parallel execution in expert systems and other applications.
Simulated Performance of a Reduction-Based Multiprocessor
An assessment of multilisp: lessons from experience Multilisp is a parallel programming language derived from the Scheme dialect of Lisp by addition of the future construct. It has been implemented on Concert, a 32-processor shared-memory multiprocessor. A statistics-gathering feature of Concert Multilisp produces parallelism profiles showing the number of processors busy with computing or overhead, as a function of time. Experience gained using parallelism profiles and other measurement tools on several application programs has revealed three basic ways in which future generates concurrency. These ways are illustrated on two example programs: the Lisp mapping function mapcar and the partitioning routine from Quicksort. Experience with Multilisp programming exposes issues relating to side effects, error and exception handling, low-level operations for explicit manipulation of futures and tasks, and speculative computing, which are also discussed. The basic outlines of Multilisp are now fairly clear and have stood the test of being used for several applications, but further language design work is especially needed in the areas of speculative computing and exception handling.
A bidirectional data driven Lisp engine for the direct execution of Lisp in parallel
Queue-based multi-processing LISP As the need for high-speed computers increases, the need for multi-processors will become more apparent. One of the major stumbling blocks to the development of useful multi-processors has been the lack of a good multi-processing language—one which is both powerful and understandable to programmers. Among the most compute-intensive programs are artificial intelligence (AI) programs, and researchers hope that the potential degree of parallelism in AI programs is higher than in many other applications. In this paper we propose multi-processing extensions to Lisp. Unlike other proposed multi-processing Lisps, this one provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
Distributed cooperation with action systems Action systems provide a method to program distributed systems that emphasizes the overall behavior of the system. System behavior is described in terms of the possible interactions (actions) that the processes can engage in, rather than in terms of the sequential code that the processes execute. The actions provide a symmetric communication mechanism that permits an arbitrary number of processes to be synchronized by a common handshake. This is a generalization of the usual approach, employed in languages like CSP and Ada, in which communication is asymmetric and restricted to involve only two processes. Two different execution models are given for action systems: a sequential one and a concurrent one. The sequential model is easier to use for reasoning, and is essentially equivalent to the guarded iteration statement by Dijkstra. It is well suited for reasoning about system properties in temporal logic, but requires a stronger fairness notion than it is reasonable to assume a distributed implementation will support. The concurrent execution model reflects the true concurrency that is present in a distributed execution, and corresponds to the way in which the system is actually implemented. An efficient distributed implementation of action systems on a local area network is described. The fairness assumptions of the concurrent model can be guaranteed in this implementation. The relationship between the two execution models is studied in detail in the paper. For systems that will be called fairly serializable, the two models are shown to be equivalent. Proof methods are given for verifying this property of action systems. It is shown that for fairly serializable systems, properties that hold for any concurrent execution of the system can be established by temporal proofs that are conducted entirely within the simpler sequential execution model.
Automatic verification of finite-state concurrent systems using temporal logic specifications We give an efficient procedure for verifying that a finite-state concurrent system meets a specification expressed in a (propositional, branching-time) temporal logic. Our algorithm has complexity linear in both the size of the specification and the size of the global state graph for the concurrent system. We also show how this approach can be adapted to handle fairness. We argue that our technique can provide a practical alternative to manual proof construction or use of a mechanical theorem prover for verifying many finite-state concurrent systems. Experimental results show that state machines with several hundred states can be checked in a matter of seconds.
Using emoticons to reduce dependency in machine learning techniques for sentiment classification Sentiment Classification seeks to identify a piece of text according to its author's general feeling toward their subject, be it positive or negative. Traditional machine learning techniques have been applied to this problem with reasonable success, but they have been shown to work well only when there is a good match between the training and test data with respect to topic. This paper demonstrates that match with respect to domain and time is also important, and presents preliminary experiments with training data labeled with emoticons, which has the potential of being independent of domain, topic and time.
Managing inconsistent specifications: reasoning, analysis, and action In previous work, we advocated continued development of specifications in the presence of inconsistency. To support this, we used classical logic to represent partial specifications and to identify inconsistencies between them. We now present an adaptation of classical logic, which we term quasi-classical (QC) logic, that allows continued reasoning in the presence of inconsistency. The adaptation is a weakening of classical logic that prohibits all trivial derivations, but still allows all resolvants of the assumptions to be derived. Furthermore, the connectives behave in a classical manner. We then present a development called labeled QC logic that records and tracks assumptions used in reasoning. This facilitates a logical analysis of inconsistent information. We discuss the application of labeled QC logic in the analysis of multiperspective specifications. Such specifications are developed by multiple participants who hold overlapping, often inconsistent, views of the systems they are developing.
ScenIC: A Strategy for Inquiry-Driven Requirements Determination ScenIC is a requirements engineering method for evolving systems. Derived from the Inquiry Cycle model of requirements refinement, it uses goal refinement and scenario analysis as its primary methodological strategies. ScenIC rests on an analogy with human memory: semantic memory consists of generalizations about system properties; episodic memory consists of specific episodes and scenarios; and working memory consists of reminders about incomplete refinements. Method-specific reminders and resolution guidelines are activated by the state of episodic or semantic memory. The paper presents a summary of the ScenIC strategy and guidelines.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows; specifying means stating, independently of any implementation, the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.073888
0.075479
0.051319
0.037904
0.022281
0.003229
0.000547
0.000025
0
0
0
0
0
0
Delay-dependent robust H∞ control for uncertain discrete-time fuzzy systems with time-varying delays This paper deals with the robust H∞ control problem for discrete-time Takagi-Sugeno (T-S) fuzzy systems with norm-bounded parametric uncertainties and interval time-varying delays. First, based on a new Lyapunov functional, we present a sufficient condition guaranteeing that the resulting closed-loop system is robustly stable and satisfies a prescribed H∞ performance level. The Lyapunov functional used here depends on not only the fuzzy basis function but on the lower and upper bounds of the time-varying delay as well. Second, two classes of delay-dependent conditions for the existence of the concerned H∞ fuzzy controllers are given in terms of relaxed linear matrix inequalities (LMIs), and a desired controller can be designed by using the solutions to these LMIs. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed design method.
Robust sliding-mode control for uncertain time-delay systems: an LMI approach This note is devoted to robust sliding-mode control for time-delay systems with mismatched parametric uncertainties. A delay-independent sufficient condition for the existence of linear sliding surfaces is given in terms of linear matrix inequalities, based on which the corresponding reaching motion controller is also developed. The results are illustrated by an example.
Nonsynchronized-state estimation of multichannel networked nonlinear systems with multiple packet dropouts via T-S fuzzy-affine dynamic models This paper investigates the problem of robust ℋ∞ state estimation for a class of multichannel networked nonlinear systems with multiple packet dropouts. The nonlinear plant is represented by Takagi-Sugeno (T-S) fuzzy-affine dynamic models with norm-bounded uncertainties, and stochastic variables with general probability distributions are adopted to characterize the data-missing phenomenon in output channels. The objective is to design an admissible state estimator guaranteeing the stochastic stability of the resulting estimation-error system with a prescribed ℋ∞ disturbance attenuation level. It is assumed that the plant premise variables, which are often the state variables or their functions, are not measurable, so that the estimator implementation with state-space partition may not be synchronized with the state trajectories of the plant. Based on a piecewise-quadratic Lyapunov function combined with the S-procedure and some matrix-inequality-convexifying techniques, two different approaches are developed to robust filtering design for the underlying T-S fuzzy-affine systems with unreliable communication links. All the solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches.
Robust H∞ control for linear discrete-time systems with norm-bounded nonlinear uncertainties. This paper studies the problem of robust control of a class of uncertain discrete-time systems. The class of uncertain systems is described by a state-space model with linear nominal parts and norm-bounded nonlinear uncertainties in the state and output equations. The authors address the problem of robust H∞ control in which both robust stability and a prescribed H∞ performance are required to be achieved, irrespective of the uncertainties. It has been shown that instead of the nonlinear uncertain system, one may only consider a related linear uncertain system and thus a linear static state feedback control law is designed, which is in terms of a Riccati inequality.
New approach on robust delay-dependent H∞ control for uncertain T-S fuzzy systems with interval time-varying delay This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither any model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
Stability Analysis for Neural Networks With Time-Varying Delay via Improved Techniques. This paper is concerned with the stability problem for neural networks with a time-varying delay. First, an improved generalized free-weighting-matrix integral inequality is proposed, which encompasses the conventional one as a special case. Second, an improved Lyapunov-Krasovskii functional is constructed that contains two complement triple-integral functionals. Third, based on the improved techniques, a new stability condition is derived for neural networks with a time-varying delay. Finally, two widely used numerical examples are given to demonstrate that the proposed stability condition is very competitive in both conservatism and complexity.
Stability and stabilization of T-S fuzzy systems with time delay via Wirtinger-based double integral inequality. This paper concerns the issue of stabilization and stability analysis for Takagi-Sugeno (T-S) fuzzy systems with time delay. A new type of Lyapunov-Krasovskii functional (LKF), including a non-quadratic Lyapunov functional and a triple integral term, is introduced to obtain stability conditions of fuzzy time-delay systems. A Wirtinger-based double integral inequality is used to estimate the integral term, and the free weighting variable technique is also employed for controller synthesis and stability analysis. Additionally, newer and less conservative delay-dependent stability conditions are proposed in the form of linear matrix inequalities (LMIs). Furthermore, several examples are provided to illustrate how effective the suggested approaches are.
Further Results on Stabilization of Chaotic Systems Based on Fuzzy Memory Sampled-Data Control. This note investigates sampled-data control for chaotic systems. A memory sampled-data control scheme that involves a constant signal transmission delay is employed for the first time to tackle the stabilization problem for Takagi-Sugeno fuzzy systems. The advantage of the constructed Lyapunov functional lies in the fact that it is neither necessarily positive on sampling intervals nor necessarily...
Direct adaptive interval type-2 fuzzy control of multivariable nonlinear systems A fuzzy logic controller equipped with a training algorithm is developed such that the H∞ tracking performance is satisfied for a model-free nonlinear multiple-input multiple-output (MIMO) system with external disturbances. Owing to the universal approximation theorem, fuzzy control provides nonlinear controllers, i.e., fuzzy logic controllers, to perform the unknown nonlinear control actions, and the tracking error due to the matching error and external disturbance is attenuated to an arbitrary desired level by using the H∞ tracking design technique. In this paper, a new direct adaptive interval type-2 fuzzy controller is developed to handle training data corrupted by noise or rule uncertainties for nonlinear MIMO systems involving external disturbances. Therefore, linguistic fuzzy control rules can be directly incorporated into the controller and combined with the H∞ attenuation technique. Simulation results show that the interval type-2 fuzzy logic system can handle unpredicted internal disturbance and data uncertainties very well, but the adaptive type-1 fuzzy controller must spend more control effort in order to deal with noisy training data. Furthermore, the adaptive interval type-2 fuzzy controller can perform successful control and guarantee the global stability of the resulting closed-loop system, and the tracking performance can be achieved.
Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables.The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
The operational versus the conventional approach to software development The conventional approach to software development is being challenged by new ideas, many of which can be organized into an alternative decision structure called the “operational” approach. The operational approach is explained and compared to the conventional one.
Supporting Multi-Perspective Requirements Engineering Supporting collaborating requirements engineers as they independently construct a specification is highly desirable. Here, we show how collaborative requirements engineering can be supported using a planner, domain abstractions, and automated decision science techniques. In particular, we show how requirements conflict resolution can be assisted through a combination of multi-agent multicriteria optimization and heuristic resolution generation. We then summarize the use of our tool to...
Kaisa Sere: In Memoriam.
Lossless image compression utilizing reference points coding This paper proposes a lossless image compression method utilizing the neighboring pixels to determine the reference point values. The proposed method scans every pixel row by row and assigns a 2-bit reference point value to each pixel by comparing its intensity value to the neighboring pixels' intensity values. The intensity value will be stored to a new file only when the comparison fails to find a neighborhood pixel with the same intensity value. The compression is achieved as only the information of 2-bit reference point values for all pixels and certain intensity values are required for storage. The suggested method is tested on various types of images and the results show that it performs well for most of the images.
1.111497
0.133912
0.089275
0.066968
0.044767
0.022222
0.000366
0.000066
0.000013
0
0
0
0
0
The representation of viewpoints in the KASIMIR decision support system for oncology In this paper, we introduce the knowledge representation based on viewpoints on which the KASIMIR system, aimed at decision support in oncology, relies. The design of viewpoints is considered on both theoretical and practical levels, and takes its place in the range of work on the subject, which has a rather long history in the domain of object-based knowledge representation systems. From the theoretical side, the viewpoints are considered within the distributed description logic C-OWL, which allows the explicit representation and manipulation of viewpoints. From the practical side, an operational implementation of viewpoints in C-OWL within an application in oncology shows how viewpoints are designed, and how they can be
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Thoughts on the software process
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
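As a concrete illustration of the mechanics this abstract describes, here is a minimal tabu-search sketch for a 0/1 multiconstraint knapsack. It uses only the generic ingredients (single-flip moves, a tabu tenure, and an aspiration criterion); the paper's specialized choice rules, advanced strategies, and Target Analysis are not reproduced, and all names and numbers are illustrative.

```python
def tabu_search_knapsack(values, weights, capacities, iters=500, tenure=7):
    """Generic tabu-search sketch for a 0/1 multiconstraint knapsack:
    single-variable flips, a short-term tabu list, and an aspiration
    criterion that overrides tabu status for new best solutions."""
    n = len(values)
    x = [0] * n                    # start from the (feasible) empty solution
    tabu_until = [0] * n           # iteration until which flipping i is tabu

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= cap
                   for w, cap in zip(weights, capacities))

    def value(sol):
        return sum(v * s for v, s in zip(values, sol))

    best, best_val = x[:], value(x)
    for it in range(1, iters + 1):
        move, move_val = None, None
        for i in range(n):
            x[i] ^= 1              # tentative flip of variable i
            if feasible(x):
                v = value(x)
                aspirates = v > best_val          # aspiration criterion
                if (it >= tabu_until[i] or aspirates) and \
                        (move_val is None or v > move_val):
                    move, move_val = i, v
            x[i] ^= 1              # undo the tentative flip
        if move is None:
            break                  # no admissible move left
        x[move] ^= 1
        tabu_until[move] = it + tenure
        if move_val > best_val:
            best, best_val = x[:], move_val
    return best, best_val

# Toy instance (numbers are illustrative): 4 items, 2 resource constraints.
print(tabu_search_knapsack(values=[10, 7, 5, 9],
                           weights=[[3, 2, 4, 1], [2, 4, 1, 3]],
                           capacities=[6, 6]))
```

A real implementation in the spirit of the paper would add the extreme-point-oriented move rules and the integer-infeasibility measures the authors describe.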
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties and, furthermore, power-related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Multi-criteria decision making in ontologies Decision support is one of the main objectives of ontology-based knowledge management systems. However, there is no standard method that would define how to model decisions in ontologies. Despite many research efforts and established methods for decision modelling and support, these methods have not yet been systematically applied to the field of ontologies. This paper proposes an ontology-based multi-criteria decision making method that enables one to define decision models using ontology as the base construct. It structures decision models in such a way that the problem solution can be obtained by reasoning upon the ontology. We propose a generic approach that can be applied to an arbitrary domain. The proposed method is based on qualitative multi-criteria decision making, which is applied to the field of ontologies. OWL is used as the ontology representation language. As a proof of concept we have developed an ontology-based decision support system for an electric power transmission company. The proposed method represents an important step forward in the field of ontologies. Its main advantages are: a higher level of decision support provided by the ontology, direct use of information captured in the ontology for decision making, a higher level of business process automation, and reuse of decision model concepts in definitions of more complex ontology concepts.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties and, furthermore, power-related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Superposition: composition vs refinement of non-deterministic, action-based systems The traditional notion of superposition has been used for supporting two distinct aspects of parallel program design: composition and refinement. This is because, when trace-based semantics of concurrency are considered, which is typical of most formal methods, these two relationships are modelled as inclusion between sets of behaviours. However, when forms of non-deterministic behaviour have to be considered, which is the case for component and service-based development, these two aspects do not coincide. In this paper, we show how the two roles of superposition can be separated and supported at the language and semantic levels. For this purpose, we use a categorical formalisation of program design in the language CommUnity that we are also using for addressing architectural concerns, another area in which the distinction between composition and refinement is particularly important.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties and, furthermore, power-related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Supporting user participation design using a fuzzy analytic hierarchy process approach There are three fundamental problems that may occur in the process of user participation design: first, the participants/users may not be able to express their requirements clearly; second, they have little knowledge about design; and third, they are generally unfamiliar with the software that designers use. Based on this understanding, a method that considers design rationale is proposed in this work to support the process of user participation design. In addressing the user participation process, a fuzzy analytic hierarchy process (AHP) approach is applied to grasp people's ideas in the initial design phase. A case study on creating house layout design is employed to illustrate the proposed approach. In this regard, to help participants/users create layout designs, it is proposed that a 3D generative system is used, which integrates navigational concepts, direct manipulation, and the design rationale theory. In a nutshell, this research proposes a system to implement a design rationale model and improve design communication in the user participation process. To demonstrate the effectiveness of the proposed prototype system, a user test is performed and we put forward some findings and research questions for further research and industry practices.
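For orientation, the sketch below shows one standard way fuzzy AHP turns pairwise judgments into priority weights: Buckley's fuzzy geometric mean followed by centroid defuzzification. This is an assumption about the flavor of fuzzy AHP involved; the paper may use a different variant, and the matrix entries below are made up.

```python
import numpy as np

def fuzzy_ahp_weights(M):
    """Fuzzy-AHP priority weights via the geometric-mean method.
    M is an (n, n, 3) array of triangular fuzzy numbers (l, m, u)
    giving pairwise comparisons of n criteria."""
    n = M.shape[0]
    # Fuzzy geometric mean of each row, component-wise (valid for l <= m <= u > 0).
    g = M.prod(axis=1) ** (1.0 / n)                 # shape (n, 3)
    # Fuzzy weights: g_i "divided" by the sum of all g_j (bounds flip in division).
    total = g.sum(axis=0)                           # (sum_l, sum_m, sum_u)
    w = np.stack([g[:, 0] / total[2],
                  g[:, 1] / total[1],
                  g[:, 2] / total[0]], axis=1)
    # Defuzzify each triangular weight by its centroid and renormalize.
    crisp = w.mean(axis=1)
    return crisp / crisp.sum()

# Illustrative 3-criteria comparison matrix (values are made up).
M = np.array([[[1, 1, 1], [2, 3, 4], [4, 5, 6]],
              [[1/4, 1/3, 1/2], [1, 1, 1], [1, 2, 3]],
              [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]]])
print(fuzzy_ahp_weights(M))
```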
Fuzzy clustering analysis for optimizing fuzzy membership functions Fuzzy model identification is an application of fuzzy inference systems for identifying unknown functions from a given set of sampled data. The most important task in fuzzy identification is to decide the parameters of the membership functions (MFs) used in fuzzy systems. Considerable effort (Chung and Lee, 1994; Jang, 1993; Sun and Jang, 1993) has been devoted to initializing the parameters of fuzzy membership functions. However, the problems of parameter identification have not been solved formally. Assessments of these algorithms are discussed in the paper. Based on the fuzzy c-means (FCM) clustering algorithm (Bezdek, 1987), we propose a heuristic method to calibrate the fuzzy exponent iteratively. A hybrid learning algorithm for refining the system parameters is then presented. Examples are demonstrated to show the effectiveness of the proposed method, compared with the equalized universe method (EUM) and the subtractive clustering method (SCM) (Chiu, 1994). The simulation results indicate the general applicability of our methods to a wide range of applications.
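Since the heuristic centers on calibrating the fuzzy exponent, a bare-bones FCM iteration helps show exactly where that exponent m enters the update formulas. The sketch below simply takes m as a parameter; the calibration heuristic itself is not reproduced, and all names are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-6, seed=0):
    """Bare-bones Fuzzy C-Means; m > 1 is the fuzzy exponent (fuzzifier)
    whose calibration is the subject of the heuristic described above."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                      # random initial fuzzy partition
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)          # prototypes
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        # u_ik is proportional to d_ik^(-2/(m-1)), normalized over clusters.
        U_new = (D ** -p) / (D ** -p).sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return V, U_new
        U = U_new
    return V, U
```

Note how m controls the sharpness of the partition: as m approaches 1 the memberships become crisp, while large m flattens them, which is why a principled choice of m matters.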
Collaborative clustering with the use of Fuzzy C-Means and its quantification In this study, we introduce the concept of collaborative fuzzy clustering, a conceptual and algorithmic machinery for the collective discovery of a common structure (relationships) within a finite family of data residing at individual data sites. There are two fundamental features of the proposed optimization environment. First, given existing constraints which prevent individual sites from exchanging detailed numeric data, any communication has to be realized at the level of information granules. The specificity of these granules impacts the effectiveness of ensuing collaborative activities. Second, the fuzzy clustering realized at the level of the individual data site has to constructively consider the findings communicated by other sites and act upon them while running the optimization confined to the particular data site. Adhering to these two general guidelines, we develop a comprehensive optimization scheme and discuss its two-phase character in which the communication phase of the granular findings intertwines with the local optimization being realized at the level of the individual site and exploits the evidence collected from other sites. The proposed augmented form of the objective function is essential in the navigation of the overall optimization that has to be completed on the basis of the data and available information granules. The intensity of collaboration is optimized by choosing a suitable tradeoff between the two components of the objective function. The objective function based clustering used here concerns the well-known Fuzzy C-Means (FCM) algorithm. Experimental studies presented include some synthetic data, selected data sets coming from the machine learning repository and the weather data coming from Environment Canada.
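The "augmented form of the objective function" has, in commonly cited presentations of collaborative FCM, the following shape for data site ii; the notation here is illustrative and may differ from the paper's:

```latex
% Augmented FCM objective at site ii; beta[ii,jj] is the collaboration
% intensity between sites, P the number of sites (notation illustrative):
Q[ii] = \sum_{i=1}^{c}\sum_{k=1}^{N} u_{ik}^{2}[ii]\, d_{ik}^{2}[ii]
      + \sum_{\substack{jj=1 \\ jj \neq ii}}^{P} \beta[ii,jj]
        \sum_{i=1}^{c}\sum_{k=1}^{N}
        \bigl(u_{ik}[ii] - \tilde{u}_{ik}[ii\,|\,jj]\bigr)^{2}\, d_{ik}^{2}[ii]
```

The first sum is the local FCM objective; the second penalizes disagreement with the partitions induced by the other P - 1 sites, with beta[ii,jj] setting the collaboration intensity the abstract mentions as the tradeoff between the two components.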
Design of information granule-oriented RBF neural networks and its application to power supply for high-field magnet To realize effective modeling and secure accurate prediction abilities of models for power supply for high-field magnet (PSHFM), we develop a comprehensive design methodology of information granule-oriented radial basis function (RBF) neural networks. The proposed network comes with a collection of radial basis functions, which are structurally as well as parametrically optimized with the aid of information granulation and a genetic algorithm. The structure of the information granule-oriented RBF neural networks invokes two types of clustering methods, namely K-Means and Fuzzy C-Means (FCM). The taxonomy of the resulting information granules relates to the format of the activation functions of the receptive fields used in RBF neural networks. The optimization of the network deals with a number of essential parameters as well as the underlying learning mechanisms (e.g., the width of the Gaussian function, the number of nodes in the hidden layer, and a fuzzification coefficient used in the FCM method). During the identification process, we are guided by a weighted objective function (performance index) in which a weight factor is introduced to achieve a sound balance between the approximation and generalization capabilities of the resulting model. The proposed model is applied to modeling power supply for high-field magnet, where the model is developed in the presence of a limited dataset (the small size of the data is implied by the high costs of acquiring data) as well as strong nonlinear characteristics of the underlying phenomenon. The obtained experimental results show that the proposed network exhibits high accuracy and generalization capabilities.
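A plausible reading of the weighted performance index mentioned above, written with an illustrative weight factor theta balancing training (approximation) against testing (generalization) error; the exact form used in the paper may differ:

```latex
% Illustrative weighted performance index; TR/TE are training/testing sets,
% \hat{y}_k the network output, and \theta the balancing weight factor:
V = \underbrace{\frac{1}{|TR|}\sum_{k \in TR}\bigl(y_k - \hat{y}_k\bigr)^{2}}_{\text{approximation}}
  \;+\; \theta\,
    \underbrace{\frac{1}{|TE|}\sum_{k \in TE}\bigl(y_k - \hat{y}_k\bigr)^{2}}_{\text{generalization}}
```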
From fuzzy data analysis and fuzzy regression to granular fuzzy data analysis This note offers some personal views on the two pioneers of fuzzy sets, the late Professors Hideo Tanaka and Kiyoji Asai. The intent is to share some personal memories about these remarkable researchers and humans, highlight their long-lasting research accomplishments, and stress their visible impact on the fuzzy set community. The note elaborates on new and promising research avenues initiated by fuzzy regression and identifies future developments of these models emerging within the realm of Granular Computing and giving rise to a plethora of granular fuzzy models and higher-order and higher-type granular constructs.
A parametric model for determining consensus priority vectors from fuzzy comparison matrices. We consider a group decision-making problem where a set of alternatives have to be ranked according to fuzzy preference judgments given by multiple experts. We assume that expert assessments are expressed in the form of fuzzy multiplicative preference relations or fuzzy comparison matrices. In this paper we propose a general model to generate crisp priority weights of the alternatives from possibly inconsistent and conflicting fuzzy preference relations. We express our model in terms of matrix approximations to address the consistency problem and we use weighted metrics to simulate the group dynamics. Matrix approximation techniques for deriving a common crisp consistent matrix are extended to work with fuzzy matrices by using the concept of α-cuts. In the aggregation process, the importance of each expert is taken into consideration according to the agreement of the group with the expert. This results in a parametric optimization problem for which a computational formulation is given.
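The α-cut device the abstract relies on is standard: for a triangular fuzzy judgment a = (l, m, u), each α in [0, 1] yields a crisp interval, and the matrix-approximation step can then work interval-wise. In symbols:

```latex
% alpha-cut of a triangular fuzzy number a = (l, m, u):
[a]_\alpha = \bigl[\, l + \alpha\,(m - l),\; u - \alpha\,(u - m) \,\bigr],
\qquad \alpha \in [0, 1]
```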
Building the fundamentals of granular computing: A principle of justifiable granularity The study introduces and discusses a principle of justifiable granularity, which supports a coherent way of designing information granules in presence of experimental evidence (either of numerical or granular character). The term ''justifiable'' pertains to the construction of the information granule, which is formed in such a way that it is (a) highly legitimate (justified) in light of the experimental evidence, and (b) specific enough meaning it comes with a well-articulated semantics (meaning). The design process associates with a well-defined optimization problem with the two requirements of experimental justification and specificity. A series of experiments is provided as well as a number of constructs carried for various formalisms of information granules (intervals, fuzzy sets, rough sets, and shadowed sets) are discussed as well.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model "reliable computing platform" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Applications experience with Linda We describe three experiments using C-Linda to write parallel codes. The first involves assessing the similarity of DNA sequences. The results demonstrate Linda's flexibility—Linda solutions are presented that work well at two quite different levels of granularity. The second uses a prime finder to illustrate a class of algorithms that do not (easily) submit to automatic parallelizers, but can be parallelized in straight-forward fashion using C-Linda. The final experiment describes the process lattice model, an “inherently” parallel application that is naturally conceived as multiple interacting processes. Taken together, the experience described here bolsters our claim that Linda can bridge the gap between the growing collection of parallel hardware and users eager to exploit parallelism.This work is supported by the NSF under grants DCR-8601920 and DCR-8657615 and by the ONR under grant N00014-86-K-0310. We are grateful to Argonne National Labs for providing access to a Sequent Symmetry.
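To make the programming model concrete, here is a toy tuple space in Python mimicking the two Linda primitives the experiments rely on, out (deposit) and in (blocking withdraw). This is a sketch of the idea only, not C-Linda's actual API, and it omits the rd and eval primitives; in_ stands in for "in", which is a reserved word in Python.

```python
import threading

class TupleSpace:
    """Toy tuple space: out() deposits a tuple, in_() blocks until a
    matching tuple can be withdrawn (a pattern matches on arity and
    on every non-None field)."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, *tup):
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def in_(self, *pattern):
        def matches(t):
            return len(t) == len(pattern) and all(
                p is None or p == f for p, f in zip(pattern, t))
        with self._cv:
            while True:
                for t in self._tuples:
                    if matches(t):
                        self._tuples.remove(t)
                        return t
                self._cv.wait()

ts = TupleSpace()
ts.out("prime", 7)
print(ts.in_("prime", None))   # -> ('prime', 7)
```

Worker processes coordinating through such a shared space, rather than through explicit channels, is what lets the same program run at the different levels of granularity the abstract describes.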
Class Refinement and Interface Refinement in Object-Oriented Programs
Unintrusive Ways to Integrate Formal Specifications in Practice Formal methods can be neatly woven in with less formal, but more widely-used, industrial-strength methods. We show how to integrate the Larch two-tiered specification method (GHW85a) with two used in the waterfall model of software development: Structured Analysis (Ros77) and Structure Charts (YC79). We use Larch traits to define data elements in a data dictionary and the functionality of basic activities in Structured Analysis data-flow diagrams; Larch interfaces and traits to define the behavior of modules in Structure Charts. We also show how to integrate loosely formal specification in a prototyping model by discussing ways of refining Larch specifications as code evolves. To provide some realism to our ideas, we draw our examples from a non-trivial Larch specification of the graphical editor for the Miro visual languages (HMT+90). The companion technical report, CMU-CS-91-111, contains the entire specification.
A compile-time scheduling heuristic for interconnection-constrained heterogeneous processor architectures The authors present a compile-time scheduling heuristic called dynamic level scheduling, which accounts for interprocessor communication overhead when mapping precedence-constrained, communicating tasks onto heterogeneous processor architectures with limited or possibly irregular interconnection structures. This technique uses dynamically-changing priorities to match tasks with processors at each step, and schedules over both spatial and temporal dimensions to eliminate shared resource contention. This method is fast, flexible, widely targetable, and displays promising performance
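The core of the heuristic can be sketched compactly: at each step, pair a ready task with a processor so as to maximize a dynamic level, conventionally summarized as DL = static level - max(data-available time, processor-free time). The sketch below follows that summary; the published method's refinements for heterogeneous processors and irregular interconnects are omitted, and all names are illustrative.

```python
from functools import lru_cache

def dynamic_level_schedule(tasks, succ, comm, n_procs):
    """Sketch of dynamic-level scheduling: repeatedly pick the (ready task,
    processor) pair maximizing DL = static_level - max(data_ready, proc_free).
    tasks: {task: exec_time}; succ: {task: [children]};
    comm: {(u, v): transfer time if u and v land on different processors}."""
    preds = {t: [] for t in tasks}
    for u, cs in succ.items():
        for v in cs:
            preds[v].append(u)

    @lru_cache(maxsize=None)
    def static_level(t):            # longest compute path from t to a sink
        return tasks[t] + max((static_level(c) for c in succ.get(t, [])),
                              default=0)

    proc_free = [0.0] * n_procs
    placed, finish = {}, {}
    while len(placed) < len(tasks):
        ready = [t for t in tasks if t not in placed
                 and all(p in placed for p in preds[t])]
        best = None
        for t in ready:
            for p in range(n_procs):
                # data available once every predecessor's result has arrived
                da = max((finish[u] + (0 if placed[u] == p
                                       else comm.get((u, t), 0))
                          for u in preds[t]), default=0.0)
                dl = static_level(t) - max(da, proc_free[p])
                if best is None or dl > best[0]:
                    best = (dl, t, p, max(da, proc_free[p]))
        _, t, p, start = best
        placed[t] = p
        finish[t] = start + tasks[t]
        proc_free[p] = finish[t]
    return placed, finish

# Tiny illustrative DAG: a and b feed c, with inter-processor transfer cost 2.
tasks = {"a": 2, "b": 3, "c": 1}
succ = {"a": ["c"], "b": ["c"]}
comm = {("a", "c"): 2, ("b", "c"): 2}
print(dynamic_level_schedule(tasks, succ, comm, n_procs=2))
```

Because both the spatial term (which processor) and the temporal term (when data and the processor are free) enter DL, shared-resource contention is avoided by construction, which is the point the abstract emphasizes.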
Scalable Hyperspectral Image Coding Here we propose scalable Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), an embedded, block-based, wavelet transform coding algorithm of low complexity for hyperspectral image compression. Scalable 3D-SPECK supports both SNR and resolution progressive coding. After the wavelet transform, 3D-SPECK treats each subband as a coding block. To generate an SNR scalable bitstream, the stream is organized so that the same indexed bit planes are put together across coding blocks and subbands, with the higher bit planes preceding the lower ones. To generate resolution scalable bitstreams, each subband is encoded separately to generate a sub-bitstream. Rate is allocated amongst the sub-bitstreams produced for each block. To decode the image sequence to a particular level at a given rate, we need to encode each subband at a higher rate so that the algorithm can truncate the sub-bitstream to the assigned rate. Resolution scalable 3D-SPECK is efficient for image server applications. Results show that scalable 3D-SPECK provides excellent performance on hyperspectral image compression.
MoMut::UML Model-Based Mutation Testing for UML
score_0 to score_13: 1.111845, 0.10302, 0.10302, 0.10302, 0.10302, 0.061845, 0.015658, 0, 0, 0, 0, 0, 0, 0
Logic and the structure of space: towards a visual logic for spatial reasoning Since their early days logic programming techniques have been used as a tool for the description, specification, and analysis of languages, in the area of formal languages as well as for natural languages. Even entire compilers have been built on the basis of techniques like Definite Clause Grammars (War80, CH87). In the last decade the range of languages used for human computer interaction has been broadened. Today visual languages, i.e., languages that utilize diagrams or other spatial, graphical representations, are becoming more and more important. Clearly, a formal framework for the development of visual languages is a desirable aim and could contribute a lot to that field of research, especially if a single framework is capable of supporting the entire range from formal specifications of pictures to executable picture parsers. Logic, which is a proven framework for handling sequential, textual languages, can well be employed as the formal basis of such a framework. The poster will present a logic for reasoning about visual structures which is derived from standard Horn clause logic by augmenting it with means to specify multidimensional spatial arrangements. This extension (called picture logic in the following) retains all the properties of standard Horn clauses so that well-known deduction techniques, logic programming, logic grammars, etc. can readily be adapted. In picture logic spatial properties can be expressed by the use of abstract example pictures. Thus referring to picture logic as a visual logic addresses both aspects at once: the logic is used to reason about visual structures and in turn is a visual language itself. Most picture specification languages developed so far are derived from standard grammar formalisms that stem from the realm of classical compiler construction techniques (CC90, Pf92). Usually these are of limited expressiveness and cannot be used to analyse certain types of picture properties. The poster will give examples for such problems like non-tree-like picture structures, etc. and how they can be specified with picture logic. There are some other approaches to logic specification tools for pictures (HM90) as well as to relational languages for picture specification (FPT+91) that are structurally very close to logic languages. Their common approach is to express spatial relations of objects by formulae like or in a relational model which roughly is the same. The two major drawbacks of these approaches are: (1) The representation of multidimensional (visual) facts is squeezed into a one-dimensional (textual) representation. Therefore it becomes difficult, if not impossible, to build an intuitive connection between a description (a ruleset) and a picture (the described object). Furthermore the number of facts and rules required to describe a picture is obviously very fast growing even for simple pictures. (2) As the picture to be processed has to be given as a set of facts (axioms), none of the originally given spatial relations in the picture can be made invalid (retracted) during a derivation process if a monotonic logic is used. Both shortcomings of these more conventional approaches can easily be overcome by picture logic since it uses terms instead of facts to capture spatial structure and since these terms are visual themselves.
Picture Terms and Picture Unification The approach to model spatial structures in logic programs by special types of terms suggests itself when we compare how, e.g., the structure of a mathematical term can be analysed by a standard logic rule like , which expresses the chain rule. There, the input term is unified with non-ground terms describing partial aspects of the term's structure. By means of unification the term is broken up into subterms which are bound to variables and passed on to other rules for further analysis. In analogy to this, picture logic lets one write rules like the one in figure 1 which could be used to test whether a given diagram contains two areas with a non-empty overlap. What exactly is a picture term? A picture term can be regarded as a not necessarily connected, directed, acyclic, bipartite graph with a special visualization. The exact structure of a picture term graph that belongs to a picture term is determined by a so-called picture language that
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties and, furthermore, power-related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Performance evaluation of 3D hybrid transforms and 2D-set partitioning methods for lossy hyperspectral data compression. The three-dimensional nature of hyperspectral data, with a huge amount of correlation in the spatial and spectral domains, makes transform coding methods more efficient for compression. Transform methods concentrate signal power in a few coefficients, resulting in better low bit rate performance with low computational complexity. A set of 3D hybrid transforms obtained by combining various 1D spectral decorrelators and 2D spatial decorrelators is investigated for their performance evaluation. Wavelet-based methods generate clustered coefficients having a parent–child relationship between the subbands. This property can be exploited by entropy encoders to generate bit streams. For entropy encoding, various 2D-set partitioning methods are studied. 2D-set partitioning in hierarchical trees and 2D-tree block encoding exploit the parent–child relationship, and 2D-set partitioning in embedded blocks exploits spatial correlation between neighboring pixels within the subband in space and frequency of transformed band images. 2D-set partitioning in blocks of hierarchical trees (2D-SPBHT) exploits energy clustering as well as the tree structure of the wavelet transform simultaneously. It is shown that 2D-SPBHT provides better performance at all bitrates as compared to other 2D-set partitioning methods irrespective of the 3D transformation used.
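One member of the family of 3D hybrid transforms evaluated here can be sketched directly: a 1D DCT as the spectral decorrelator followed by one 2D Haar analysis level per band as the spatial decorrelator. The particular pairing and the synthetic cube below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dct

def hybrid_transform(cube):
    """Illustrative 3D hybrid transform: 1D DCT along the spectral axis,
    then a single 2D Haar analysis level per band (sides assumed even)."""
    # spectral decorrelation along axis 0 (the band axis)
    a = dct(cube, type=2, norm='ortho', axis=0)
    # orthonormal 2D Haar: combine each 2x2 spatial block with weight 1/2
    ll = (a[:, ::2, ::2] + a[:, 1::2, ::2] + a[:, ::2, 1::2] + a[:, 1::2, 1::2]) / 2
    lh = (a[:, ::2, ::2] + a[:, 1::2, ::2] - a[:, ::2, 1::2] - a[:, 1::2, 1::2]) / 2
    hl = (a[:, ::2, ::2] - a[:, 1::2, ::2] + a[:, ::2, 1::2] - a[:, 1::2, 1::2]) / 2
    hh = (a[:, ::2, ::2] - a[:, 1::2, ::2] - a[:, ::2, 1::2] + a[:, 1::2, 1::2]) / 2
    return ll, lh, hl, hh

cube = np.random.rand(8, 16, 16)   # (bands, rows, cols), synthetic data
subbands = hybrid_transform(cube)
```

The resulting subbands are what a 2D-set partitioning entropy coder such as those compared above would then scan bit plane by bit plane.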
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
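A minimal sketch of the basic tabu-search loop on a toy multiconstraint knapsack instance; the instance, tenure, and aspiration rule below are illustrative assumptions, not the paper's specialized choice rules or Target Analysis machinery.

# Toy multiconstraint knapsack: maximize p.x subject to A.x <= b, x in {0,1}^n.
profits = [10, 7, 12, 8, 6, 11]
A = [[3, 2, 4, 1, 5, 2],          # two resource constraints
     [1, 4, 2, 3, 1, 4]]
b = [8, 9]
n = len(profits)

def feasible(x):
    return all(sum(a[i] * x[i] for i in range(n)) <= bi for a, bi in zip(A, b))

def value(x):
    return sum(p * xi for p, xi in zip(profits, x))

def tabu_search(iters=200, tenure=5):
    x = [0] * n                    # start from the empty (feasible) solution
    best, best_val = x[:], value(x)
    tabu = {}                      # variable index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):         # neighborhood: flip one variable
            y = x[:]
            y[i] = 1 - y[i]
            if not feasible(y):
                continue
            v = value(y)
            # aspiration: a tabu move is admitted if it beats the incumbent
            if tabu.get(i, -1) >= it and v <= best_val:
                continue
            candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = max(candidates)  # best admissible move (may be non-improving)
        x = y
        tabu[i] = it + tenure      # forbid re-flipping i for `tenure` iterations
        if v > best_val:
            best, best_val = y[:], v
    return best, best_val

print(tabu_search())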
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Using data groups to specify and check side effects Reasoning precisely about the side effects of procedure calls is important to many program analyses. This paper introduces a technique for specifying and statically checking the side effects of methods in an object-oriented language. The technique uses data groups, which abstract over variables that are not in scope, and limits program behavior by two alias-confining restrictions, pivot uniqueness and owner exclusion. The technique is shown to achieve modular soundness and is simpler than previous attempts at solving this problem.
Enhancing the Pre- and Postcondition Technique for More Expressive Specifications We describe enhancements to the pre- and postcondition technique that help specifications convey information more effectively. Some enhancements allow one to specify redundant information that can be used in "debugging" specifications. For instance, adding examples to a specification gives redundant information that may aid some readers, and can also be used to help ensure that the specification says what is intended. Other enhancements allow improvements in frame axioms for object-oriented...
Writing Larch interface language specifications Current research in specifications is emphasizing the practical use of formal specifications in program design. One way to encourage their use in practice is to provide specification languages that are accessible to both designers and programmers. With this goal in mind, the Larch family of formal specification languages has evolved to support a two-tiered approach to writing specifications. This approach separates the specification of state transformations and programming language dependencies from the specification of underlying abstractions. Thus, each member of the Larch family has a subset derived from a programming language and another subset independent of any programming languages. We call the former interface languages, and the latter the Larch Shared Language.This paper focuses on Larch interface language specifications. Through examples, we illustrate some salient features of Larch/CLU, a Larch interface language for the programming language CLU. We give an example of writing an interface specification following the two-tiered approach and discuss in detail issues involved in writing interface specifications and their interaction with their Shared Language components.
A specifier's introduction to formal methods Formal methods used in developing computer systems (i.e. mathematically based techniques for describing system properties) are defined, and their role is delineated. Formal specification languages, which provide the formal method's mathematical basis, are examined. Certain pragmatic concerns about formal methods and their users, uses, and characteristics are discussed. Six well-known or commonly used formal methods are illustrated by simple examples. They are Z, VDM, Larch, temporal logic, CSP, and transition axioms.
Tisa: A Language Design and Modular Verification Technique for Temporal Policies in Web Services Web services are distributed software components, that are decoupled from each other using interfaces with specified functional behaviors. However, such behavioral specifications are insufficient to demonstrate compliance with certain temporal non-functional policies. An example is demonstrating that a patient's health-related query sent to a health care service is answered only by a doctor (and not by a secretary). Demonstrating compliance with such policies is important for satisfying governmental privacy regulations. It is often necessary to expose the internals of the web service implementation for demonstrating such compliance, which may compromise modularity. In this work, we provide a language design that enables such demonstrations, while hiding the majority of the service's source code. The key idea is to use greybox specifications to allow service providers to selectively hide and expose parts of their implementation. The overall problem of showing compliance is then reduced to two subproblems: whether the desired properties are satisfied by the service's greybox specification, and whether this greybox specification is satisfied by the service's implementation. We specify policies using LTL and solve the first problem by model checking. We solve the second problem by refinement techniques.
Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it. (Authors' Abstract)
A validation system for object oriented specifications of information systems In this paper, we present a set of software tools for developing and validating object oriented conceptual models specified in TROLL. TROLL is a formal object-oriented language for modelling information systems on a high level of abstraction. The tools include editors, syntax and consistency checkers as well as an animator which generates executable prototypes from the models on the same level of abstraction. In this way, the model behaviour can be observed and checked against the informal user requirements. After a short introduction to some validation techniques and research questions, we describe briefly the TROLL language as well as its graphical version OMTROLL. We then explain the system architecture and show its functionalities by a simplified example of an industrial application which is called CATC (Computer-Aided Testing and Certifying).
Visualization of Path Expressions in a Virtual Object-Oriented Database Query Language Path expressions have been accepted for concisely manipulating the nested structures in complex object-oriented query expressions. However, previous visual query languages hardly represent such query expressions in a concise and intuitive way partly due to improper visual representation of path expressions and partly due to lack of well-defined syntax and semantics of languages. In this paper, we present visual modeling of path expressions in a visual object-oriented database query language called Visual Object-Oriented Query Language (VOQL) which has excellent expressive power for sets, simple and intuitive syntax, and well-defined semantics. This is enabled by explicitly specifying the semantics of multi-valued path expressions based on the visual notation capable of representing set relationships in addition to functional relationships. The basic visual constructs called blobs and nested blobs denote sets of objects that path expressions represent while the constructs called binding edges and flattening edges visually simulate the notions of variable binding and dot functions in path expressions respectively. Based on the constructs, the grammar of VOQL defines the syntactic components while the semantics of query expressions are provided by syntax-directed translation to the counterparts in the extended relational calculus. Also, the visual constructs allow modeling of restricted universal quantification with a visual scoping box and effectively represent nested quantification and recursive queries without semantic ambiguities. An explicit specification of the semantics of multi-valued path expressions in a concise and unified visual notation is new and visually clarifies the semantics of quantified queries in the nested structures.
Protocol Verification Via Projections The method of projections is a new approach to reduce the complexity of analyzing nontrivial communication protocols. A protocol system consists of a network of protocol entities and communication channels. Protocol entities interact by exchanging messages through channels; messages in transit may be lost, duplicated as well as reordered. Our method is intended for protocols with several distinguishable functions. We show how to construct image protocols for each function. An image protocol is specified just like a real protocol. An image protocol system is said to be faithful if it preserves all safety and liveness properties of the original protocol system concerning the projected function. An image protocol is smaller than the original protocol and can typically be more easily analyzed. Two protocol examples are employed herein to illustrate our method. An application of this method to verify a version of the high-level data link control (HDLC) protocol is described in a companion paper.
System processes are software too This talk explores the application of software engineering tools, technologies, and approaches to developing and continuously improving systems by focusing on the systems' processes. The systems addressed are those that are complex coordinations of the efforts of humans, hardware devices, and software subsystems, where humans are on the “inside”, playing critical roles in the functioning of the system and its processes. The talk suggests that in such cases, the collection of processes that use the system is tantamount to being the system itself, suggesting that improving the system's processes amounts to improving the system. Examples of systems from a variety of different domains that have been addressed and improved in this way will be presented and explored. The talk will suggest some additional untried software engineering ideas that seem promising as vehicles for supporting system development and improvement, and additional system domains that seem ripe for the application of this kind of software-based process technology. The talk will emphasize that these applications of software engineering approaches to systems has also had the desirable effect of adding to our understandings of software engineering. These understandings have created a software engineering research agenda that is complementary to, and synergistic with, agendas for applying software engineering to system development and improvement.
Inquiry-Based Requirements Analysis This approach emphasizes pinpointing where and when information needs occur; at its core is the inquiry cycle model, a structure for describing and supporting discussions about system requirements. The authors use a case study to describe the model's conversation metaphor, which follows analysis activities from requirements elicitation and documentation through refinement.
Requirements definition and its interface to the SARA design methodology for computer-based systems This paper presents results of efforts during 1979--1981 to integrate and enhance the work of the System ARchitects Apprentice (SARA) Project at UCLA and the Information System Design Optimization System (ISDOS) Project at the University of Michigan. While expressing a need for a requirements definition subsystem, SARA had no appropriate requirements definition language, no defined set of requirements analysis techniques or tools, and no procedures to form a more cohesive methodology for linking computer system requirements to the ensuing design. Research has been performed to fill this requirements subsystem gap, using concepts derived from the ISDOS project as a basis for departure.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2046
0.2046
0.018615
0.008529
0.001538
0.000056
0.000019
0.000005
0.000001
0
0
0
0
0
H-Infinity Control Problem Of Linear Periodic Piecewise Time-Delay Systems This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.
Fuzzy-model-based admissibility analysis and output feedback control for nonlinear discrete-time systems with time-varying delay. This paper is concerned with the admissibility analysis and stabilization problems for singular fuzzy discrete-time systems with time-varying delay. The novelty of this paper comes from the consideration of a new summation inequality which is less conservative than the usual Jensen inequality, the Abel-Lemma based inequality and the Seuret inequality. Based on the inequality, sufficient conditions are established to ensure the systems to be admissible. Moreover, the corresponding conditions for the existence of desired static output feedback controller gains are derived to guarantee that the closed-loop system is admissible. The conditions can be solved by a modified cone complementarity linearization (CCL) algorithm. Examples are given to show the effectiveness of the proposed method.
Stability analysis of Lur'e systems with additive delay components via a relaxed matrix inequality. This paper is concerned with the stability analysis of Lur'e systems with sector-bounded nonlinearity and two additive time-varying delay components. In order to accurately understand the effect of time delays on the system stability, the extended matrix inequality for estimating the derivative of the Lyapunov-Krasovskii functionals (LKFs) is employed to achieve the conservatism reduction of stability criteria. It reduces the estimation gap of the popular reciprocally convex combination lemma (RCCL). Combining the extended matrix inequality and two types of LKFs leads to several stability criteria, which are less conservative than the RCCL-based criteria under the same LKFs. Finally, the advantages of the proposed criteria are demonstrated through two examples.
Multiple integral inequalities and stability analysis of time delay systems. This paper is devoted to stability analysis of continuous-time delay systems based on a set of Lyapunov–Krasovskii functionals. New multiple integral inequalities are derived that involve the famous Jensen’s and Wirtinger’s inequalities, as well as the recently presented Bessel–Legendre inequalities of Seuret and Gouaisbaut (2015) and the Wirtinger-based multiple-integral inequalities of Park et al. (2015) and Lee et al. (2015). The present paper aims at showing that the proposed set of sufficient stability conditions can be arranged into a bidirectional hierarchy of LMIs establishing a rigorous theoretical basis for comparison of conservatism of the investigated methods. Numerical examples illustrate the efficiency of the method.
Robust delay-dependent stability criteria for uncertain neural networks with two additive time-varying delay components. This paper considers the problem of robust stability of uncertain neural networks with two additive time-varying delay components. The activation functions are monotone nondecreasing with known lower and upper bounds. By constructing a modified augmented Lyapunov function, some new stability criteria are established in terms of linear matrix inequalities, which are easily solved by various convex optimization techniques. Compared with the existing works, the obtained criteria are less conservative due to the reciprocal convex technique and an improved inequality, which provides a more accurate upper bound than the Jensen inequality for dealing with the cross-term. Finally, two numerical examples are given to illustrate the effectiveness of the proposed method.
New stability and stabilization conditions for T-S fuzzy systems with time delay This paper is concerned with the problem of the stability analysis and stabilization for Takagi-Sugeno (T-S) fuzzy systems with time delay. A new Lyapunov-Krasovskii functional containing the fuzzy line-integral Lyapunov function and the simple functional is chosen. By using a recently developed Wirtinger-based integral inequality and introducing slack variables, less conservative conditions in terms of linear matrix inequalities (LMIs) are derived. Several examples are given to show the advantages of the proposed results.
Stability Analysis of Sampled-Data Systems via Free-Matrix-Based Time-Dependent Discontinuous Lyapunov Approach. In this paper, a new time-dependent discontinuous Lyapunov functional, namely, the free-matrix-based time-dependent discontinuous (FMBTDD) Lyapunov functional, is introduced for stability analysis of sampled-data systems. First, a modified free-matrix-based integral inequality (MFMBII) is derived based on the existing free-matrix-based integral inequality [1] and it is applied to develop a stability criterion for sampled-data systems. Then, inspired by the MFMBII, an FMBTDD term is established that leads to efficient stability conditions. Four numerical examples are given to demonstrate the effectiveness of the proposed methods.
Stability Analysis of Distributed Delay Neural Networks Based on Relaxed Lyapunov-Krasovskii Functionals. This paper revisits the problem of asymptotic stability analysis for neural networks with distributed delays. The distributed delays are assumed to be constant and prescribed. Since a positive-definite quadratic functional does not necessarily require all the involved symmetric matrices to be positive definite, it is important for constructing relaxed Lyapunov-Krasovskii functionals, which generally lead to less conservative stability criteria. Based on this fact and using two kinds of integral inequalities, a new delay-dependent condition is obtained, which ensures that the distributed delay neural network under consideration is globally asymptotically stable. This stability criterion is then improved by applying the delay partitioning technique. Two numerical examples are provided to demonstrate the advantage of the presented stability criteria.
Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables.The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
Using emoticons to reduce dependency in machine learning techniques for sentiment classification Sentiment Classification seeks to identify a piece of text according to its author's general feeling toward their subject, be it positive or negative. Traditional machine learning techniques have been applied to this problem with reasonable success, but they have been shown to work well only when there is a good match between the training and test data with respect to topic. This paper demonstrates that match with respect to domain and time is also important, and presents preliminary experiments with training data labeled with emoticons, which has the potential of being independent of domain, topic and time.
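A toy sketch of the emoticon-as-noisy-label idea described above; the corpus, emoticon set, and choice of classifier here are illustrative assumptions, not the paper's experimental setup.

import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Distant supervision: emoticons provide noisy sentiment labels,
# then are stripped so the classifier cannot simply memorize them.
raw = [
    "loved the ending :)", "what a great match :)", "so relaxing today :)",
    "this is awful :(", "missed my train again :(", "worst update ever :(",
]
labels = [1 if ":)" in t else 0 for t in raw]
texts = [re.sub(r":\)|:\(", "", t).strip() for t in raw]

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vec.transform(["great ending", "awful update"])))  # expect [1 0]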
From object-oriented to goal-oriented requirements analysis
A constructive approach to the design of distributed systems The underlying model of distributed systems is that of loosely coupled components running in parallel and communicating by message passing. Description, construction and evolution of these systems is facilitated by separating the system structure, as a set of components and their interconnections, from the functional description of individual component behaviour. Furthermore, component reuse and structuring flexibility is enhanced if components are context independent ie. self- contained with a well defined interface for component interaction. The Conic environment for distributed programming supports this model. In particular, Conic provides a separate configuration language for the description, construction and evolution of distributed systems. The Conic environment has demonstrated a working environment which supports system distribution, reconfiguration and extension. We had initially supposed that Conic might pose difficult challenges for us as software designers. For example, what design techniques should we employ to develop a system that exploits the Conic facilities? In fact we have experienced quite the opposite. The principles of explicit system structure and context independent components that underlie Conic have led us naturally to a design approach which differs from that of both current industrial practice and current research. Our approach is termed "constructive" since it emphasises the satisfaction of system requirements by composition of components. In this paper we describe the approach and illustrate its use by application to an example, a model airport shuttle system which has been implemented in Conic.
Parallel image normalization on a mesh connected array processor Image normalization is a basic operation in various image processing tasks. A parallel algorithm for fast binary image normalization is proposed for a mesh connected array processor. The principal operation in this algorithm is pixel mapping. The basic idea of parallel pixel mapping is to utilize a store and forward mechanism which routes pixels from their source locations to destinations in parallel along the paths of minimum length. The routing is based on a simple yet powerful concept of flow control patterns . This can form the basis for designing other parallel algorithms for low level image processing. The normalization process is decomposed into three procedures: translation, rotation and scaling. In each procedure, a mapping algorithm is employed to route the object pixels from source locations to destinations. Simulation results for the parallel image normalization on generated images are provided.
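A sequential NumPy stand-in for the translation step of normalization, expressed as pixel mapping from source to destination locations; the mesh-parallel routing via flow control patterns is not modeled in this sketch.

import numpy as np

def translate_to_center(img):
    """Map each object pixel of a binary image so the centroid
    lands at the array center (sequential stand-in for the
    mesh-parallel pixel routing described above)."""
    rows, cols = np.nonzero(img)
    if rows.size == 0:
        return img.copy()
    dr = img.shape[0] // 2 - int(round(rows.mean()))
    dc = img.shape[1] // 2 - int(round(cols.mean()))
    out = np.zeros_like(img)
    r2 = np.clip(rows + dr, 0, img.shape[0] - 1)
    c2 = np.clip(cols + dc, 0, img.shape[1] - 1)
    out[r2, c2] = 1
    return out

img = np.zeros((9, 9), dtype=np.uint8)
img[0:2, 0:3] = 1                      # small blob in a corner
print(translate_to_center(img))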
Robust passivity analysis for neutral-type neural networks with mixed and leakage delays. This paper investigates the problem of passivity of neutral-type neural networks with mixed and leakage delays. By establishing a suitable augmented Lyapunov functional and combining a new integral inequality with the reciprocally convex combination technique, we obtain some sufficient passivity conditions, which are formulated in terms of linear matrix inequalities (LMIs). Here, some useful information on the neuron activation function ignored in the existing literature is taken into account. Finally, some numerical examples are given to demonstrate the effectiveness of the proposed method.
1.248
0.082667
0.001714
0.001125
0.000803
0.00049
0.000235
0.000078
0
0
0
0
0
0
The programming language Z-- Z is a specification language, and, rightly, not in general executable. Z-- is a programming language superficially identical to Z, but using only those forms of expressions and predicate which are immediately executable. The Z-- approach differs from other Z animations in being single-pass, without backtracking, and in modelling a set as its membership test, thus imposing no general restriction to finiteness. A prototype Z-- interpreter is described.
The role of knowledge in software development Software development is knowledge-intensive. Many concepts have been developed to ease or guide the processing of knowledge in software development, including information hiding, modularity, objects, functions and procedures, patterns, and more. These concepts are supported by various methods, approaches, and tools using symbols, graphics, and languages. Some are formal; others are semiformal or simply made up of key practices. Methods and approaches in software engineering are often based on the results of empirical observations or on individual success stories.
Organizing usability work to fit the full product range
Knowledge Representation And Reasoning In Software Engineering It has been widely recognized that in order to solve difficult problems using computers one will usually have to use a great deal of knowledge (often domain specific), rather than a few general principles. The intent of this special issue was to study how this attitude has affected research on tools for improved software productivity and quality. Many such tools and problems related to them were discussed at a Workshop on the Development of Intelligent and Cooperative Information Systems, held in Niagara-on-the-Lake in April 1991, from which the idea for this issue originated.
Specifying Reactive Systems in B AMN This paper describes techniques for specifying and designing reactive systems in the B Abstract Machine (AMN) language, using concepts from procedural process control. In addition, we consider what forms of concurrent extensions to B AMN would make it more effective in representing such systems.
Viewpoint Consistency in Z and LOTOS: A Case Study Specification by viewpoints is advocated as a suitable method of specifying complex systems. Each viewpoint describes the envisaged system from a particular perspective, using concepts and specification languages best suited for that perspective. Inherent in any viewpoint approach is the need to check or manage the consistency of viewpoints and to show that the different viewpoints do not impose contradictory requirements. In previous work we have described a range of techniques for...
Formal Methods Applied to a Floating-Point Number System A formalization of the IEEE standard for binary floating-point arithmetic (ANSI/IEEE Std. 754-1985) is presented in the set-theoretic specification language Z. The formal specification is refined into four sequential components, which unpack the operands, perform the arithmetic, and pack and round the result. This refinement follows proven rules and so demonstrates a mathematically rigorous method of program development. In the course of the proofs, useful internal representations of floating-point numbers are specified. The procedures presented form the basis for the floating-point unit of the Inmos IMS T800 transputer.
Knowledge management and its link to artificial intelligence Knowledge management is an emerging area which is gaining interest by both industry and government. As we move toward building knowledge organizations, knowledge management will play a fundamental role towards the success of transforming individual knowledge into organizational knowledge. One of the key building blocks for developing and advancing this field of knowledge management is artificial intelligence, which many knowledge management practitioners and theorists are overlooking. This paper will discuss the emergence and future of knowledge management, and its link to artificial intelligence.
Risk, gap and strength: key concepts in knowledge management This paper argues that there are certain concepts within the general domain of Knowledge Management that have not been fully explored. The discipline will benefit from a more detailed look at some of these concepts. The concepts of risk, gap and strength are the particular concepts that are explored in some more detail within this paper. A reason for describing these elements as concepts rather than terms is discussed. More precise definitions for the concepts described can provide management support about the knowledge resource in decision-making. Several function definitions for risk, gap and strength are offered. Finally, the paper considers how these concepts can influence organisational knowledge management schemes.
Operational Requirements Accommodation in Distributed System Design Operational requirements are qualities which influence a software system's entire development cycle. The investigation reported here concentrated on three of the most important operational requirements: reliability via fault tolerance, growth, and availability. Accommodation of these requirements is based on an approach to functional decomposition involving representation in terms of potentially independent processors, called virtual machines. Functional requirements may be accommodated through hierarchical decomposition of virtual machines, while performance requirements may be associated with individual virtual machines. Virtual machines may then be mapped to a representation of a configuration of physical resources, so that performance requirements may be reconciled with available performance characteristics.
Research on Knowledge-Based Software Environments at Kestrel Institute We present a summary of the CHI project conducted at Kestrel Institute through mid-1984. The objective of this project was to perform research on knowledge-based software environments. Toward this end, key portions of a prototype environment, called CHI, were built that established the feasibility of this approach. One result of this research was the development of a wide-spectrum language that could be used to express all stages of the program development process in the system. Another result was that the prototype compiler was used to synthesize itself from very-high-level description of itself. In this way the system was bootstrapped. We describe the overall nature of the work done on this project, give highlights of implemented prototypes, and describe the implications that this work suggests for the future of software engineering. In addition to this historical perspective, current research projects at Kestrel Institute as well as commercial applications of the technology at Reasoning Systems are briefly surveyed.
Prototyping interactive information systems Applying prototype-oriented development processes to computerized application systems significantly improves the likelihood that useful systems will be developed and that the overall development cycle will be shortened. The prototype development methodology and development tool presented here have been widely applied to the development of interactive information systems in the commercial data processing setting. The effectiveness and relationship to other applications is discussed.
On ternary square-free circular words Circular words are cyclically ordered finite sequences of letters. We give a computer-free proof of the following result by Currie: square-free circular words over the ternary alphabet exist for all lengths l except for 5, 7, 9, 10, 14, and 17. Our proof reveals an interesting connection between ternary square-free circular words and closed walks in the K(3,3) graph. In addition, our proof implies an exponential lower bound on the number of such circular words of length l and allows one to list all lengths l for which such a circular word is unique up to isomorphism.
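A brute-force sketch of the objects involved, assuming the standard definitions (a circular word w is square-free when no factor of ww of length at most |w| is a square xx); for small l it reproduces the exceptional lengths 5, 7, 9, 10, 14, 17.

def circ_square_free(w):
    """A circular word is square-free if no factor of w+w of
    length at most len(w) is a square xx."""
    l = len(w)
    d = w + w
    for start in range(l):
        for half in range(1, l // 2 + 1):
            if d[start:start + half] == d[start + half:start + 2 * half]:
                return False
    return True

def find(l, prefix=""):
    """Backtracking search for a ternary square-free circular word."""
    if len(prefix) == l:
        return prefix if circ_square_free(prefix) else None
    for c in "012":
        w = prefix + c
        # prune: the linear word built so far must itself be square-free,
        # and any new square must end at the last position
        ok = all(w[-2 * k:-k] != w[-k:] for k in range(1, len(w) // 2 + 1))
        if ok:
            r = find(l, w)
            if r:
                return r
    return None

for l in range(3, 20):
    print(l, find(l) or "none")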
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.105014
0.110029
0.110029
0.110029
0.1
0.05
0.016667
0.000842
0.000004
0
0
0
0
0
Scanning and prediction in multidimensional data arrays The problem of sequentially scanning and predicting data arranged in a multidimensional array is considered. We introduce the notion of a scandictor, which is any scheme for the sequential scanning and prediction of such multidimensional data. The scandictability of any finite (probabilistic) data array is defined as the best achievable expected "scandiction" performance on that array. The scandictability of any (spatially) stationary random field on Z^m is defined as the limit of its scandictability on finite "boxes" (subsets of Z^m), as their edges become large. The limit is shown to exist for any stationary field, and essentially be independent of the ratios between the box dimensions. Fundamental limitations on scandiction performance in both the probabilistic and the deterministic settings are characterized for the family of difference loss functions. We find that any stochastic process or random field that can be generated autoregressively with a maximum-entropy innovation process is optimally "scandicted" the way it was generated. These results are specialized for cases of particular interest. The scandictability of any stationary Gaussian field under the squared-error loss function is given a single-letter expression in terms of its spectral measure and is shown to be attained by the raster scan. For a family of binary Markov random fields (MRFs), the scandictability under the Hamming distortion measure is fully characterized.
Universal coding, information, prediction, and estimation A connection between universal codes and the problems of prediction and statistical estimation is established. A known lower bound for the mean length of universal codes is sharpened and generalized, and optimum universal codes constructed. The bound is defined to give the information in strings relative to the considered class of processes. The earlier derived minimum description length criterion for estimation of parameters, including their number, is given a fundamental information, theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Gaussian autoregressive moving average (ARMA) processes below a bound, which is determined by the information in the data.
The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS
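A sketch of the Golomb-Rice (power-of-two Golomb) codes that LOCO-I builds on, for nonnegative residuals; the context modeling, signed-residual mapping, and adaptive parameter selection of JPEG-LS are omitted here.

def rice_encode(n, k):
    """Golomb-Rice code with parameter m = 2**k:
    unary-coded quotient, then the k low bits of the remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits, k):
    q = bits.index("0")                       # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

for n in range(8):
    code = rice_encode(n, k=2)
    assert rice_decode(code, 2) == n          # round-trip check
    print(n, code)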
Fast Constant Division Routines When there is no division circuit available, the arithmetical function of division is normally performed by a library subroutine. The library subroutine normally allows both the divisor and the dividend to be variables, and requires the execution of hundreds of assembly instructions. This correspondence provides a fast algorithm for performing the integer division of a variable by a predetermined divisor. Based upon this algorithm, an efficient division routine has been constructed for each odd divisor up to 55. These routines may be implemented in assembly languages, in microcodes, and in special-purpose circuits.
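One well-known way to realize division by a fixed constant is reciprocal multiplication: precompute a scaled inverse M and shift s so that n // d == (n * M) >> s over the operand range. The sketch below self-verifies its constant exhaustively; it is an illustration of the general idea, not necessarily the paper's shift-and-add routines.

def magic(d, bits=16):
    """Find (M, s) with n // d == (n * M) >> s for all n < 2**bits."""
    for s in range(bits, 2 * bits + 1):
        M = (1 << s) // d + 1          # rounded-up scaled reciprocal
        if all((n * M) >> s == n // d for n in range(1 << bits)):
            return M, s
    raise ValueError("no constant found in the searched shift range")

d = 7
M, s = magic(d)
print(f"n // {d} == (n * {M}) >> {s} for all 16-bit n")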
SICLIC: A Simple Inter-Color Lossless Image Coder Many applications require high quality color images. In order to alleviate storage space and transmission time, while preserving high quality, these images are losslessly compressed. Most of the image compression algorithms treat the color image, usually in RGB format, as a set of independent gray scale images. SICLIC is a novel inter-color coding algorithm based on a LOCO-like algorithm. It combines the simplicity of Golomb-Rice coding with the potential of context models, in both intra-color and inter-color encoding. It also supports intra-color and inter-color alphabet extension, in order to reduce the redundancy of code. SICLIC attains compression ratios superior to those obtained with most of the state-of-the-art compression algorithms and achieves compression ratios very close to those of Inter-Band CALIC, with much lower complexity. With arithmetic coding, SICLIC attains better compression than Inter-Band CALIC.
A universal finite memory source An irreducible parameterization for a finite memory source is constructed in the form of a tree machine. A universal information source for the set of finite memory sources is constructed by a predictive modification of an earlier studied algorithm-Context. It is shown that this universal source incorporates any minimal data-generating tree machine in an asymptotically optimal manner in the following sense: the negative logarithm of the probability it assigns to any long typical sequence, generated by any tree machine, approaches that assigned by the tree machine at the best possible rate
Spectral and spatial decorrelation of Landsat-TM data for lossless compression Presents some new techniques of spectral and spatial decorrelation in lossless data compression of remotely sensed imagery. These techniques provide methods to efficiently compute the optimal band combination and band ordering based on the statistical properties of Landsat-TM data. Experiments on several Landsat-TM images show that using both the spectral and the spatial nature of the remotely sensed data results in significant improvement over spatial decorrelation alone. These techniques result in higher compression ratios and are computationally inexpensive
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay systems, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
A Study of The Fragile Base Class Problem In this paper we study the fragile base class problem. This problem occurs in open object-oriented systems employing code inheritance as an implementation reuse mechanism. System developers unaware of extensions to the system developed by its users may produce a seemingly acceptable revision of a base class which may damage its extensions. The fragile base class problem becomes apparent during maintenance of open object-oriented systems, but requires consideration during design. We express the fragile base class problem in terms of a flexibility property. By means of five orthogonal examples, violating the flexibility property, we demonstrate different aspects of the problem. We formulate requirements for disciplining inheritance, and extend the refinement calculus to accommodate for classes, objects, class-based inheritance, and class refinement. We formulate and formally prove a flexibility theorem demonstrating that the restrictions we impose on inheritance are sufficient to permit safe substitution of a base class with its revision in presence of extension classes.
Reflection and semantics in LISP
Expressing the relationships between multiple views in requirements specification The authors generalize and formalize the definition of a ViewPoint to facilitate its manipulation for composite system development. A ViewPoint is defined to be a loosely-coupled, locally managed object encapsulating representation knowledge, development process knowledge and partial specification knowledge about a system and its domain. In attempting to integrate multiple requirements specification ViewPoints, overlaps must be identified and expressed, complementary participants made to interact and cooperate, and contradictions resolved. The notion of inter-ViewPoint communication is addressed as a vehicle for ViewPoint integration. The communication model presented straddles both the method construction stage during which inter-ViewPoint relationships are expressed, and the method application stage during which these relationships are enacted
A Tool For Task-Based Knowledge And Specification Acquisition Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and the maintenance of expert systems through their life cycles. (C) 1994 John Wiley & Sons, Inc.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.105262
0.007371
0.000346
0.000035
0.000012
0.000006
0
0
0
0
0
0
0
0
Relational Demonic Fuzzy Refinement. We use relational algebra to define a fuzzy refinement order called demonic fuzzy refinement and also the associated fuzzy operators, which are fuzzy demonic join (⊔fuz), fuzzy demonic meet (⊓fuz), and fuzzy demonic composition (□fuz). Our definitions and properties are illustrated by some examples using the Mathematica software (fuzzy logic).
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
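A minimal sketch of the flip-move skeleton behind such a tabu search, specialised to a 0/1 multiconstraint knapsack (my own simplification: single-variable flip moves, a fixed tabu tenure, and the classic improved-best aspiration criterion; the paper's specialised choice rules, advanced-level strategies, and Target Analysis are not modelled):

```python
# Tabu search sketch for: maximize p.x  s.t.  A x <= b,  x in {0,1}^n.
import random

def tabu_knapsack(p, A, b, iters=1000, tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(p)
    x = [0] * n
    tabu = [0] * n                  # iteration until which flipping j is tabu

    def value(v):
        return sum(pi * vi for pi, vi in zip(p, v))

    def feasible(v):
        return all(sum(row[j] * v[j] for j in range(n)) <= bi
                   for row, bi in zip(A, b))

    best, best_val = x[:], value(x)
    for it in range(iters):
        candidates = []
        for j in range(n):
            y = x[:]; y[j] ^= 1
            if not feasible(y):
                continue
            v = value(y)
            # aspiration: a tabu move is allowed if it beats the incumbent
            if it < tabu[j] and v <= best_val:
                continue
            candidates.append((v, j, y))
        if not candidates:
            break
        # best admissible move, random tie-break; may be non-improving,
        # which is what lets the search escape local optima
        v, j, y = max(candidates, key=lambda c: (c[0], rng.random()))
        x, tabu[j] = y, it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val
```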
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Distributed consensus filtering for discrete-time nonlinear systems with non-Gaussian noise This paper studies the problem of distributed estimation for a class of discrete-time nonlinear non-Gaussian systems in a not fully connected sensor network environment. The non-Gaussian process noise and measurement noise are approximated by finite Gaussian mixture models. A distributed Gaussian mixture unscented Kalman filter (UKF) is developed in which each sensor node independently calculates local statistics by using its own measurement and an average-consensus filter is utilized to diffuse local statistics to its neighbors. A main difficulty encountered is the distributed computation of the Gaussian mixture weights, which is overcome by introducing the natural logarithm transformation. The effectiveness of the proposed distributed filter is verified via a simulation example involving tracking a target in the presence of glint noise.
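The logarithm trick mentioned in the abstract can be illustrated in isolation. A sketch with names and a synchronous update scheme assumed by me: each node holds positive mixture weights, averages log-weights with its neighbours, and exponentiates, so every node converges to the (locally normalised) geometric mean of the network's weights using only neighbour-to-neighbour communication.

```python
# Average consensus on log-weights for Gaussian-mixture fusion (sketch).
import math

def fuse_mixture_weights(W, neighbors, steps=50, eps=0.2):
    """W[i][k]: node i's positive weight for mixture component k.
    neighbors[i]: indices of node i's neighbors.
    eps must be < 1/(max node degree) for the iteration to be stable."""
    n, K = len(W), len(W[0])
    Z = [[math.log(W[i][k]) for k in range(K)] for i in range(n)]
    for _ in range(steps):
        Z = [[Z[i][k] + eps * sum(Z[j][k] - Z[i][k] for j in neighbors[i])
              for k in range(K)] for i in range(n)]
    out = []
    for i in range(n):
        g = [math.exp(Z[i][k]) for k in range(K)]  # ~ geometric mean over nodes
        s = sum(g)
        out.append([gk / s for gk in g])           # local normalization
    return out
```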
Consensus-based algorithms for distributed filtering The paper addresses Distributed State Estimation (DSE) over sensor networks. Two existing consensus approaches for DSE of linear systems, named consensus on information (CI) and consensus on measurements (CM), are extended to nonlinear systems. Further, a novel hybrid consensus approach exploiting both CM and CI (named HCMCI = Hybrid CM + CI) is introduced in order to combine their complementary benefits. Novel theoretical results, limited to linear systems, on the guaranteed stability of the HCMCI filter under minimal requirements (i.e. collective observability and network connectivity) are proved. Finally, a simulation case-study is presented in order to comparatively show the effectiveness of the proposed consensus-based state estimators.
The extended Kalman filter as an exponential observer for nonlinear systems In this correspondence, we analyze the behavior of the extended Kalman filter as a state estimator for nonlinear deterministic systems. Using the direct method of Lyapunov, we prove that under certain conditions, the extended Kalman filter is an exponential observer, i.e., the dynamics of the estimation error is exponentially stable. Furthermore, we discuss a generalization of the Kalman filter with exponential data weighting to nonlinear systems.
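For reference, the estimator being analysed has the familiar predict/update form. A minimal sketch of one generic EKF step in my notation (the paper's contribution is the Lyapunov analysis, not the filter itself):

```python
# One EKF step for x_{k+1} = f(x_k) + w,  y_k = h(x_k) + v.
import numpy as np

def ekf_step(x, P, y, f, h, F, H, Q, R):
    """x, P: prior mean/covariance; F(x), H(x): Jacobians of f, h at x;
    Q, R: process/measurement noise covariances."""
    # predict
    x_pred = f(x)
    Fk = F(x)
    P_pred = Fk @ P @ Fk.T + Q
    # update
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R                 # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```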
Distributed cubature information filtering based on weighted average consensus. In this paper, the distributed state estimation (DSE) problem for a class of discrete-time nonlinear systems over sensor networks is investigated. First, based on weighted average consensus, a new DSE algorithm named distributed cubature information filtering (DCIF) algorithm is developed to address the high-dimensional nonlinear DSE problem. The proposed filtering algorithm not only has such advantages as easy initialization and less computation burden, but also possesses the guaranteed stability regardless of consensus steps. Moreover, it is proved that the corresponding estimation is consistent, and its mean-squared estimation errors are exponentially bounded. Finally, numerical simulations are given to verify the effectiveness of DCIF.
A scheme for robust distributed sensor fusion based on average consensus We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
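A sketch of the scheme as I read the abstract (variable names and the doubly stochastic weight matrix W are my assumptions): each node forms its local information pair, then repeatedly replaces it with a weighted average of its neighbours' pairs; solving the local pair at any step gives an estimate that converges to the global maximum-likelihood solution, since the common scaling of the averaged pair cancels.

```python
# Consensus-based distributed least-squares fusion (sketch).
import numpy as np

def consensus_ml(A, R, y, W, steps=100):
    """Node i measures y[i] = A[i] @ x + v_i, v_i ~ N(0, R[i]).
    W: n x n doubly-stochastic weights, W[i][j] > 0 only for neighbors."""
    n = len(A)
    Omega = [Ai.T @ np.linalg.inv(Ri) @ Ai for Ai, Ri in zip(A, R)]
    q = [Ai.T @ np.linalg.inv(Ri) @ yi for Ai, Ri, yi in zip(A, R, y)]
    for _ in range(steps):
        # each node averages its information pair with its neighbors'
        Omega = [sum(W[i][j] * Omega[j] for j in range(n)) for i in range(n)]
        q = [sum(W[i][j] * q[j] for j in range(n)) for i in range(n)]
    # local estimates; all converge to the global ML solution
    return [np.linalg.solve(Om, qi) for Om, qi in zip(Omega, q)]
```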
Parallel Consensus on Likelihoods and Priors for Networked Nonlinear Filtering. A novel consensus approach to networked nonlinear filtering is introduced. The proposed approach is based on the idea of carrying out in parallel a consensus on likelihoods and a consensus on prior probability distributions and then combine the outcomes with a suitable weighting factor. Simulation experiments concerning a target tracking case-study show that the proposed consensus-based nonlinear ...
Extended Kalman Filter Based Learning Algorithm for Type-2 Fuzzy Logic Systems and Its Experimental Evaluation. In this paper, the use of extended Kalman filter for the optimization of the parameters of type-2 fuzzy logic systems is proposed. The type-2 fuzzy logic system considered in this study benefits from a novel type-2 fuzzy membership function which has certain values on both ends of the support and the kernel, and uncertain values on other parts of the support. To have a comparison of the extended K...
Robust H∞ control of Takagi-Sugeno fuzzy systems with state and input time delays This paper addresses the robust H∞ fuzzy control problem for nonlinear uncertain systems with state and input time delays through the Takagi-Sugeno (T-S) fuzzy model approach. The delays are assumed to be interval time-varying delays, and no restriction is imposed on the derivative of the time delay. Based on the Lyapunov-Krasovskii functional method, delay-dependent sufficient conditions for the existence of an H∞ controller are proposed in linear matrix inequality (LMI) format. Illustrative examples are given to show the effectiveness and merits of the proposed fuzzy controller design methodology.
Wirtinger's inequality and Lyapunov-based sampled-data stabilization. Discontinuous Lyapunov functionals appeared to be very efficient for sampled-data systems (Fridman, 2010, Naghshtabrizi et al., 2008). In the present paper, new discontinuous Lyapunov functionals are introduced for sampled-data control in the presence of a constant input delay. The construction of these functionals is based on the vector extension of Wirtinger’s inequality. These functionals lead to simplified and efficient stability conditions in terms of Linear Matrix Inequalities (LMIs). The new stability analysis is applied to sampled-data state-feedback stabilization and to a novel sampled-data static output-feedback problem, where the delayed measurements are used for stabilization.
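The vector extension of Wirtinger's inequality that underpins these functionals is worth stating (my rendering of the standard result): for an absolutely continuous z on [a, b] with z(a) = 0 and any symmetric positive definite matrix R,

```latex
% Vector Wirtinger inequality used in the sampled-data Lyapunov analysis:
\[
  \int_a^b \dot z^{\mathsf T}(s)\,R\,\dot z(s)\,ds
  \;\ge\; \frac{\pi^2}{4(b-a)^2}\int_a^b z^{\mathsf T}(s)\,R\,z(s)\,ds .
\]
```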
Viewpoints: principles, problems and a practical approach to requirements engineering The paper includes a survey and discussion of viewpoint-oriented approaches to requirements engineering and a presentation of new work in this area which has been designed with practical application in mind. We describe the benefits of viewpoint-oriented requirements engineering and describe the strengths and weaknesses of a number of viewpoint-oriented methods. We discuss the practical problems of introducing viewpoint-oriented requirements engineering into industrial software engineering practice and why these have prevented the widespread use of existing approaches. We then introduce a new model of viewpoints called Preview. Preview viewpoints are flexible, generic entities which can be used in different ways and in different application domains. We describe the novel characteristics of the Preview viewpoints model and the associated processes of requirements discovery, analysis and negotiation. Finally, we discuss how well this approach addresses some outstanding problems in requirements engineering (RE) and the practical industrial problems of introducing new requirements engineering methods.
The interdisciplinary study of coordination This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.A key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.Section 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.
Integrating Action Systems and Z in a Medical System Specification This paper reports on work carried out on the formal specification of a computer-based system that is used to train the reaction abilities of patients with severe brain damage. The system contains computer programs by which the patients carry out different tests that are designed to stimulate their eyes and ears. Systems of this type are new and, to our knowledge, no formal specifications for them exist. The system specified here is developed together with the neurological clinic of a Finnish...
Addressing degraded service outcomes and exceptional modes of operation in behavioural models A dependable software system should attempt to at least partially satisfy user goals if full service provision is impossible due to an exceptional situation. In addition, a dependable system should evaluate the effects of the exceptional situation on future service provision and adjust the set of services it promises to deliver accordingly. In this paper we show how to express degraded service outcomes and exceptional modes of operation in behavioural models, i.e. use cases, activity diagrams and state charts. We also outline how to integrate the task of discovering and defining degraded outcomes and exceptional modes of operation into a requirements engineering process by presenting the relevant parts of our dependability-focused requirements engineering process DREP.
On backwards and forwards reachable sets bounding for perturbed time-delay systems Linear systems with interval time-varying delay and unknown-but-bounded disturbances are considered in this paper. We study the problem of finding an outer bound of forwards reachable sets and an inner bound of backwards reachable sets of the system. Firstly, two definitions on forwards and backwards reachable sets, where initial state vectors are not necessarily equal to zero, are introduced. Then, by using the Lyapunov-Krasovskii method, two sufficient conditions for the existence of (i) the smallest possible outer bound of forwards reachable sets and (ii) the largest possible inner bound of backwards reachable sets are derived. These conditions are presented in terms of linear matrix inequalities with two parameters that need to be tuned, and can therefore be efficiently solved by combining existing convex optimization algorithms with a two-dimensional search method to obtain optimal bounds. Lastly, the obtained results are illustrated by four numerical examples.
1.029377
0.029143
0.029045
0.015936
0.01131
0.008402
0.000006
0.000001
0
0
0
0
0
0
Aperiodic Sampled-Data-Based Control for Interval Type-2 Fuzzy Systems via Refined Adaptive Event-Triggered Communication Scheme This article is devoted to event-triggered stabilization for a class of interval type-2 (IT2) fuzzy systems with aperiodic sampling. First, the IT2 Takagi-Sugeno fuzzy model and sampled-data controllers are established subject to mismatched membership functions. Second, considering a nonuniform sampling case, a refined adaptive event-triggered communication scheme is proposed in a hierarchy form to dynamically adjust the direction and rate of the event-triggered threshold parameter by state changing trend and relative state error, respectively. Thus, a complete dual-directional regulating mechanism with sensitivity to state variation is reasonably created to give extra flexibility, which is beneficial for a preferable tradeoff between control performance and network resource. Third, considering the practical behaviors on the sampling interval, a novel integral type of time-dependent Lyapunov function is constructed. Then, the stability criterion and the controller design approach are derived. Finally, the numerical examples are provided to demonstrate the effectiveness and advantages of the proposed methods.
A novel stability analysis of linear systems under asynchronous samplings. This article proposes a novel approach to assess the stability of continuous linear systems with sampled-data inputs. The method, which is based on the discrete-time Lyapunov theorem, provides easy tractable stability conditions for the continuous-time model. Sufficient conditions for asymptotic and exponential stability are provided dealing with synchronous and asynchronous samplings and uncertain systems. An additional stability analysis is provided for the cases of multiple sampling periods and packet losses. Several examples show the efficiency of the method.
Improved delay-range-dependent stability criteria for linear systems with time-varying delays This paper is concerned with the stability analysis of linear systems with time-varying delays in a given range. A new type of augmented Lyapunov functional is proposed which contains some triple-integral terms. In the proposed Lyapunov functional, the information on the lower bound of the delay is fully exploited. Some new stability criteria are derived in terms of linear matrix inequalities without introducing any free-weighting matrices. Numerical examples are given to illustrate the effectiveness of the proposed method.
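For concreteness, a representative triple-integral term of such an augmented functional might look as follows (a sketch in my notation; the paper's exact functional may differ):

```latex
% One triple-integral term of an augmented Lyapunov functional for a
% system with delay h(t) in [h_1, h_2]; Z is a positive definite matrix.
\[
  V_3(x_t) \;=\; \int_{-h_2}^{-h_1}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t}
      \dot x^{\mathsf T}(s)\,Z\,\dot x(s)\,ds\,d\lambda\,d\theta ,
  \qquad Z = Z^{\mathsf T} > 0 .
\]
```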
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305, (email: [email protected]), (URL: http://www-formal.stanford.edu/jmc/) by starting with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
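A minimal, library-free sketch of the winning protocol, ten-fold stratified cross-validation (the fit/predict callables and the round-robin stratification are my own simplifications):

```python
# Stratified k-fold cross-validation accuracy estimate (sketch).
from collections import defaultdict

def stratified_kfold_accuracy(X, y, fit, predict, k=10):
    # group indices by class label, then deal them round-robin into folds,
    # so each fold preserves the class proportions approximately
    by_label = defaultdict(list)
    for i, label in enumerate(y):
        by_label[label].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_label.values():
        for pos, i in enumerate(idxs):
            folds[pos % k].append(i)

    accs = []
    for f in range(k):
        test = folds[f]
        test_set = set(test)
        train = [i for i in range(len(y)) if i not in test_set]
        model = fit([X[i] for i in train], [y[i] for i in train])
        preds = predict(model, [X[i] for i in test])
        accs.append(sum(p == y[i] for p, i in zip(preds, test)) / len(test))
    return sum(accs) / k
```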
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.2
0.014286
0.006452
0
0
0
0
0
0
0
0
0
0
0
Discrete techniques for 3-D digital images and patterns under transformation Three-dimensional (3-D) digital images and patterns under transformations are facilitated by the splitting-shooting method (SSM) and the splitting-integration method (SIM). The combination (CSIM) of using both SSM and SIM and two combinations (CIIM) of using SIM only are proposed for a cycle conversion T^-1 T, where T is a nonlinear transformation and T^-1 is its inverse transformation. This paper focuses on exploitation of accuracy of obtained image greyness. In our discrete algorithms, letting a 3-D pixel be split into N^3 subpixels, the convergence rates O(1/N), O(1/N^2), and O(1/N^3) of sequential error can be achieved by the three combinations respectively. High convergence rates indicate less CPU time needed. Both error bounds and computation of pixel greyness have shown the significance of the proposed new algorithms
Discrete Techniques For 3-D Digital Images And Patterns Under Transformation Three-dimensional (3-D) digital images and patterns under transformations are facilitated by the splitting-shooting method (SSM) and the splitting-integration method (SIM). The combination (CSIM) of using both SSM and SIM and two combinations (CIIM) of using SIM only are proposed for a cycle conversion T^-1 T, where T is a nonlinear transformation and T^-1 is its inverse transformation. This paper focuses on exploitation of accuracy of obtained image greyness. In our discrete algorithms, letting a 3-D pixel be split into N^3 subpixels, the convergence rates O(1/N), O(1/N^2), and O(1/N^3) of sequential error can be achieved by the three combinations respectively. High convergence rates indicate less CPU time needed. Both error bounds and computation of pixel greyness have shown the significance of the proposed new algorithms.
Continuous normalized convolution The problem of signal estimation for sparsely and irregularly sampled signals is dealt with using continuous normalized convolution. Image values on real-valued positions are estimated using integration of signals and certainties over a neighbourhood employing a local model of both the signal and the used discrete filters. The result of the approach is that an output sample close to signals with high certainty is interpolated using a small neighbourhood. An output sample close to signals with low certainty is spatially predicted from signals in a large neighbourhood.
A comparative study of nonlinear shape models for digital image processing and pattern recognition Four nonlinear shape models are presented: polynomial, Coons, perspective, and projective models. Algorithms and some properties of these models are provided. For a given physical model, such as a perspective model, comparisons are made with other mathematical models. It is proved that, under certain conditions, the perspective models can be replaced by the Coons models. Problems related to substitution and approximation of practical models that facilitate digital image processing are raised and discussed. Experimental results on digital images are presented
Splitting-Shooting Methods for Nonlinear Transformations of Digitized Patterns New splitting-shooting methods are presented for nonlinear transformations T: (ξ, η) → (x, y), where x = x(ξ, η), y = y(ξ, η). These transformations are important in computer vision, image processing, pattern recognition, and shape transformations in computer graphics. The methods can eliminate superfluous holes or blanks, leading to better images while requiring only modest computer storage and CPU time. The implementation of the proposed algorithms is simple and straightforward. Moreover, these methods can be extended to images with gray levels, to color images, and to three dimensions. They can also be implemented on parallel computers or VLSI circuits. A theoretical analysis proving the convergence of the algorithms and providing error bounds for the resulting images is presented. The complexity of the algorithms is linear. Graphical and numerical experiments are presented to verify the analytical results and to demonstrate the effectiveness of the methods.
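A sketch of the splitting-shooting idea in the 2-D greyscale case (conventions and names mine): each source pixel is split into N×N subpixels, each subpixel centre is "shot" through T, and its share of the pixel's greyness is deposited in the target pixel it lands in. Accumulating shares this way is what removes the holes a naive point-by-point mapping leaves.

```python
# Splitting-shooting sketch for a greyscale image under a transformation T.
def split_and_shoot(img, T, out_w, out_h, N=4):
    """img: 2-D list of greyness values; T: (xi, eta) -> (x, y)."""
    out = [[0.0] * out_w for _ in range(out_h)]
    for i, row in enumerate(img):
        for j, grey in enumerate(row):
            share = grey / (N * N)          # each subpixel carries this much
            for a in range(N):
                for b in range(N):
                    # subpixel centre in source coordinates
                    xi = j + (a + 0.5) / N
                    eta = i + (b + 0.5) / N
                    x, y = T(xi, eta)
                    u, v = int(x), int(y)   # target pixel hit by the "shot"
                    if 0 <= u < out_w and 0 <= v < out_h:
                        out[v][u] += share
    return out
```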
Effectiveness of exhaustive search and template matching against watermark desynchronization By focusing on a simple example, we investigate the effectiveness of exhaustive watermark detection and resynchronization through template matching against watermark desynchronization. We find that if the size of the search space does not increase exponentially, both methods provide asymptotically good results. We also show that the exhaustive search approach outperforms template matching from the point of view of reliable detection.
Circularly orthogonal moments for geometrically robust image watermarking Circularly orthogonal moments, such as Zernike moments (ZMs) and pseudo-Zernike moments (PZMs), have attracted attention due to their invariance properties. However, we find that for digital images, the invariance properties of some ZMs/PZMs are not perfectly valid. This is significant for applications of ZMs/PZMs. By distinguishing between the 'good' and 'bad' ZMs/PZMs in terms of their invariance properties, we design image watermarks with 'good' ZMs/PZMs to achieve watermark's robustness to geometric distortions, which has been considered a crucial and difficult issue in the research of digital watermarking. Simulation results show that the embedded information can be decoded at low error rates, robust against image rotation, scaling, flipping, as well as a variety of other common manipulations such as lossy compression, additive noise and lowpass filtering.
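The invariance property being exploited can be stated compactly (the standard Zernike-moment rotation behaviour, my rendering): rotating the image by an angle θ multiplies each moment only by a phase factor, so moment magnitudes are rotation-invariant.

```latex
% Z_{nm}: Zernike moment of order n, repetition m; image rotated by theta:
\[
  Z_{nm}^{\mathrm{rot}} \;=\; Z_{nm}\, e^{-\mathrm{j} m \theta},
  \qquad\text{hence}\qquad
  \big|Z_{nm}^{\mathrm{rot}}\big| \;=\; \big|Z_{nm}\big| .
\]
```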
Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection. Prediction-error expansion (PEE) is an important technique of reversible watermarking which can embed large payloads into digital images with low distortion. In this paper, the PEE technique is further investigated and an efficient reversible watermarking scheme is proposed, by incorporating in PEE two new strategies, namely, adaptive embedding and pixel selection. Unlike conventional PEE which embeds data uniformly, we propose to adaptively embed 1 or 2 bits into expandable pixel according to the local complexity. This avoids expanding pixels with large prediction-errors, and thus, it reduces embedding impact by decreasing the maximum modification to pixel values. Meanwhile, adaptive PEE allows very large payload in a single embedding pass, and it improves the capacity limit of conventional PEE. We also propose to select pixels of smooth area for data embedding and leave rough pixels unchanged. In this way, compared with conventional PEE, a more sharply distributed prediction-error histogram is obtained and a better visual quality of watermarked image is observed. With these improvements, our method outperforms conventional PEE. Its superiority over other state-of-the-art methods is also demonstrated experimentally.
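A minimal sketch of conventional (non-adaptive) PEE on a 1-D pixel sequence, to fix ideas; the previous-pixel predictor and the omission of overflow handling and location maps are my simplifications, not the paper's adaptive scheme:

```python
# Prediction-error expansion: e' = 2e + bit on embed; invert on extract.
def pee_embed(pixels, bits):
    out, k, prev = [pixels[0]], 0, pixels[0]
    for x in pixels[1:]:
        e = x - prev                 # predictor: previous ORIGINAL pixel
        if k < len(bits):
            e = 2 * e + bits[k]; k += 1
        out.append(prev + e)
        prev = x                     # keep the original as next prediction base
    return out

def pee_extract(marked, nbits):
    bits, rec = [], [marked[0]]
    for x in marked[1:]:
        prev = rec[-1]               # recovered original previous pixel
        e = x - prev
        if len(bits) < nbits:
            bits.append(e & 1)       # parity works for negative e in Python
            e = (e - (e & 1)) // 2
        rec.append(prev + e)
    return bits, rec

# Round trip: pee_extract(pee_embed([10, 12, 11], [1, 0]), 2)
# returns ([1, 0], [10, 12, 11]) - payload and original pixels recovered.
```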
Convex Dwell-Time Characterizations for Uncertain Linear Impulsive Systems New sufficient conditions for the characterization of dwell-times for linear impulsive systems are proposed and shown to coincide with continuous decrease conditions of a certain class of looped-functionals, a recently introduced type of functionals suitable for the analysis of hybrid systems. This approach allows one to consider Lyapunov functions that evolve nonmonotonically along the flow of the system in a new way, thereby broadening the admissible class of systems which may be analyzed. As a byproduct, the particular structure of the obtained conditions makes the method easily extendable to uncertain systems by exploiting some convexity properties. Several examples illustrate the approach.
The Model Checker SPIN SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. This paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.
A taxonomy of virtual worlds usage in education Virtual worlds are an important tool in modern education practices as well as providing socialisation, entertainment and a laboratory for collaborative work. This paper focuses on the uses of virtual worlds for education and synthesises over 100 published academic papers, reports and educational websites from around the world. A taxonomy is then derived from these papers, delineating current theoretical and practical work on virtual world usage, specifically in the field of education. The taxonomy identifies rich veins of current research and practice in associated educational theory and in simulated worlds or environments, yet it also demonstrates the paucity of work in important areas such as evaluation, grading and accessibility.
The Semantics of Statecharts in HOL Statecharts are used to produce operational specifications in the CASE tool STATEMATE. This tool provides some analysis capabilities such as reachability of states, but formal methods offer the potential of linking more powerful requirements analysis with CASE tools. To provide this link, it is necessary to have a rigorous semantics for the specification notation. In this paper we present an operational semantics for Statecharts in quantifier free higher order logic, embedded in the theorem prover HOL.
Formal Semantics for Ward & Mellor's Transformation Schemas
Trading Networks with Bilateral Contracts. We consider general networks of bilateral contracts that include supply chains. We define a new stability concept, called trail stability, and show that any network of bilateral contracts has a trail-stable outcome whenever agents' preferences satisfy full substitutability. Trail stability is a natural extension of chain stability, but is a stronger solution concept in general contract networks. Trail-stable outcomes are not immune to deviations of arbitrary sets of firms. In fact, we show that outcomes satisfying an even more demanding stability property -- full trail stability -- always exist. We pin down conditions under which trail-stable and fully trail-stable outcomes have a lattice structure. We then completely describe the relationships between all stability concepts. When contracts specify trades and prices, we also show that competitive equilibrium exists in networked markets even in the absence of fully transferrable utility. The competitive equilibrium outcome is trail-stable.
1.211791
0.108784
0.052948
0.036261
0.01827
0.002675
0.001126
0.000001
0
0
0
0
0
0
Unifying Theories of Parallel Programming We are developing a shared-variable refinement calculus in the style of the sequential calculi of Back, Morgan, and Morris. As part of this work, we're studying different theories of shared-variable programming. Using the concepts and notations of Hoare & He's unifying theories of programming (UTP), we give a formal semantics to a programming language that contains sequential composition, conditional statements, while loops, nested parallel composition, and shared variables. We first give a UTP semantics to labelled action systems, and then use this to give the semantics of our programs. Labelled action systems have a unique normal form that allows a simple formalisation and validation of different logics for reasoning about shared-variable programs. In this paper, we demonstrate how this is done for Lamport's Concurrent Hoare Logic.
On Rely-Guarantee Reasoning Many semantic models of rely-guarantee have been proposed in the literature. This paper proposes a new classification of the approaches into two groups based on their treatment of guarantee conditions. To allow a meaningful comparison, it constructs an abstract model for each group in a unified setting. The first model uses a weaker judgement and supports more general rules for atomic commands and disjunction. However, the stronger judgement of the second model permits the elegant separation of the rely from the guarantee due to Hayes et al. and allows refinement-style reasoning. The generalisation to models that use binary relations for postconditions is also investigated. An operational semantics is derived and both models are shown to be sound with respect to execution. All proofs have been checked with Isabelle/HOL and are available online.
UTP Semantics for Shared-State, Concurrent, Context-Sensitive Process Models Process Modelling Language (PML) is a notation for describing software development and business processes. It takes the form of a shared-state concurrent imperative language describing tasks as activities that require resources to start and provide resources when they complete. Its syntax covers sequential composition, parallelism, iteration and choice, but without explicit iteration and choice conditions. It is intended to support a range of context-sensitive interpretations, from a rough guide for intended behaviour, to being very prescriptive about the order in which tasks must occur. We are using Unifying Theories of Programming (UTP) to model this range of semantic interpretations, with formal links between them, typically of the nature of a refinement. We address a number of challenges that arise when trying to develop a compositional semantics for PML and its shared-state concurrent underpinnings, most notably in how UTP observations need to distinguish between dynamic state-changes and static context parameters. The formal semantics are intended as the basis for tool support for process analysis, with applications in the healthcare domain, covering such areas as healthcare pathways and software development and certification processes for medical device software.
A Refinement Calculus for Shared-Variable Parallel and Distributed Programming Parallel computers have not yet had the expected impact on mainstream computing. Parallelism adds a level of complexity to the programming task that makes it very error-prone. Moreover, a large variety of very different parallel architectures exists. Porting an implementation from one machine to another may require substantial changes. This paper addresses some of these problems by developing a formal basis for the design of parallel programs in the form of a refinement calculus. The calculus allows the stepwise formal derivation of an abstract, low-level implementation from a trusted, high-level specification. The calculus thus helps structuring and documenting the development process. Portability is increased, because the introduction of a machine-dependent feature can be located in the refinement tree. Development efforts above this point in the tree are independent of that feature and are thus reusable. Moreover, the discovery of new, possibly more efficient solutions is facilitated. Last but not least, programs are correct by construction, which obviates the need for difficult debugging. Our programming/specification notation supports fair parallelism, shared-variable and message-passing concurrency, local variables and channels. The calculus rests on a compositional trace semantics that treats shared-variable and message-passing concurrency uniformly. The refinement relation combines a context-sensitive notion of trace inclusion and assumption-commitment reasoning to achieve compositionality. The calculus straddles both concurrency paradigms, that is, a shared-variable program can be refined into a distributed, message-passing program and vice versa.
A distributed algorithm for detecting resource deadlocks in distributed systems This paper presents a distributed algorithm to detect deadlocks in distributed data bases. Features of this paper are (1) a formal model of the problem is presented, (2) the correctness of the algorithm is proved, i.e. we show that all true deadlocks will be detected and deadlocks will not be reported falsely, (3) no assumptions are made other than that messages are received correctly and in order and (4) the algorithm is simple.
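A local, single-machine simulation of a probe-style detection round in the spirit of such algorithms (the message-passing details and all names are my simplifications): a blocked initiator floods probes along wait-for edges, and a probe returning to the initiator signals a cycle, hence a deadlock; the paper's actual guarantees about no false detections rest on its ordered, reliable message assumptions.

```python
# Probe-based deadlock detection sketch over a distributed wait-for graph.
from collections import deque

def detects_deadlock(waits_for, initiator):
    """waits_for[p]: processes that p is blocked waiting on."""
    queue = deque((initiator, initiator, q)
                  for q in waits_for.get(initiator, ()))
    seen = set()                          # (sender, receiver) probes already sent
    while queue:
        init, sender, receiver = queue.popleft()
        if receiver == init:
            return True                   # probe returned: cycle => deadlock
        if (sender, receiver) in seen:
            continue
        seen.add((sender, receiver))
        for nxt in waits_for.get(receiver, ()):
            queue.append((init, receiver, nxt))
    return False

# Example: A waits on B, B on C, C on A => deadlock reported for initiator A.
# detects_deadlock({'A': ['B'], 'B': ['C'], 'C': ['A']}, 'A')  -> True
```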
How to cook a temporal proof system for your pet language An abstract temporal proof system is presented whose program-dependent part has a high-level interface with the programming language actually studied. Given a new language, it is sufficient to define the interface notions of atomic transitions, justice, and fairness in order to obtain a full temporal proof system for this language. This construction is particularly useful for the analysis of concurrent systems. We illustrate the construction on the shared-variable model and on CSP. The generic proof system is shown to be relatively complete with respect to pure first-order temporal logic.
Programmers use slices when debugging Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program.
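To fix ideas, a toy backward slicer over straight-line assignments (the representation and names are mine) shows what such a non-contiguous piece looks like: the slice on a variable keeps only the statements its value depends on, skipping unrelated lines in between.

```python
# Backward slice over straight-line code, stmt = (target, set_of_used_vars).
def backward_slice(stmts, var):
    needed, keep = {var}, []
    for idx in range(len(stmts) - 1, -1, -1):
        target, uses = stmts[idx]
        if target in needed:        # this statement defines something we need
            keep.append(idx)
            needed.discard(target)
            needed |= set(uses)     # now we need whatever it reads
    return sorted(keep)

# a = f(x); b = g(y); c = h(a); d = k(c, x)  -- slice on d skips line 1.
prog = [("a", {"x"}), ("b", {"y"}), ("c", {"a"}), ("d", {"c", "x"})]
print(backward_slice(prog, "d"))    # -> [0, 2, 3]
```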
Machine Learning This exciting addition to the McGraw-Hill Series in Computer Science focuses on the concepts and techniques that contribute to the rapidly changing field of machine learning--including probability and statistics, artificial intelligence, and neural networks--unifying them all in a logical and coherent manner. Machine Learning serves as a useful reference tool for software developers and researchers, as well as an outstanding text for college students.Table of contentsChapter 1. IntroductionChapter 2. Concept Learning and the General-to-Specific OrderingChapter 3. Decision Tree LearningChapter 4. Artificial Neural NetworksChapter 5. Evaluating HypothesesChapter 6. Bayesian LearningChapter 7. Computational Learning TheoryChapter 8. Instance-Based LearningChapter 9. Inductive Logic ProgrammingChapter 10. Analytical LearningChapter 11. Combining Inductive and Analytical LearningChapter 12. Reinforcement Learning.
Viewpoints: principles, problems and a practical approach to requirements engineering The paper includes a survey and discussion of viewpoint&dash;oriented approaches to requirements engineering and a presentation of new work in this area which has been designed with practical application in mind. We describe the benefits of viewpoint&dash;oriented requirements engineering and describe the strengths and weaknesses of a number of viewpoint&dash;oriented methods. We discuss the practical problems of introducing viewpoint&dash;oriented requirements engineering into industrial software engineering practice and why these have prevented the widespread use of existing approaches. We then introduce a new model of viewpoints called Preview. Preview viewpoints are flexible, generic entities which can be used in different ways and in different application domains. We describe the novel characteristics of the Preview viewpoints model and the associated processes of requirements discovery, analysis and negotiation. Finally, we discuss how well this approach addresses some outstanding problems in requirements engineering (RE) and the practical industrial problems of introducing new requirements engineering methods.
Degrees of acyclicity for hypergraphs and relational database schemes Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acyclic" database schemes was recently introduced. A number of basic desirable properties of database schemes have been shown to be equivalent to acyclicity. This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are considered and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme. The results are also of interest from a purely graph-theoretic viewpoint. The original notion of acyclicity has the counterintuitive property that a subhypergraph of an acyclic hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyclicity that are considered.
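The basic ("alpha") notion of hypergraph acyclicity referred to here can be tested with the standard GYO reduction; a sketch assuming the usual two reduction rules (remove vertices occurring in a single edge, remove edges contained in another edge; the hypergraph is acyclic iff nothing remains):

```python
# GYO reduction test for alpha-acyclicity of a hypergraph.
def is_alpha_acyclic(edges):
    edges = [set(e) for e in edges]
    changed = True
    while changed and edges:
        changed = False
        # rule 1: drop vertices that appear in exactly one edge
        for e in edges:
            only_here = {v for v in e if sum(v in f for f in edges) == 1}
            if only_here:
                e -= only_here
                changed = True
        # rule 2: drop empty edges and edges contained in another edge
        # (for duplicate edges, keep exactly one representative)
        kept = []
        for i, e in enumerate(edges):
            redundant = any((e < f) or (e == f and j < i)
                            for j, f in enumerate(edges) if j != i)
            if not e or redundant:
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

# is_alpha_acyclic([{'a','b','c'}, {'b','c','d'}])            -> True
# is_alpha_acyclic([{'a','b'}, {'b','c'}, {'a','c'}])          -> False (triangle)
```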
Animating TLA Specifications TLA (the Temporal Logic of Actions) is a linear temporal logic for specifying and reasoning about reactive systems. We define a subset of TLA whose formulas are amenable to validation by animation, with the intent to facilitate the communication between domain and solution experts in the design of reactive systems. The Temporal Logic of Actions (TLA) has been proposed by Lamport (21) for the specification and verification of reactive and concurrent systems. TLA models describe infinite sequences of states, called behaviors, that correspond to the execution of the system being specified. System specifications in TLA are usually written in a canonical form, which consists of specifying the initial states, the possible moves of the system, and supplementary fairness properties. Because such specifications are akin to the descriptions of automata and often have a strongly operational flavor, it is tempting to take such a formula and "let it run". In this paper, we define an interpreter algorithm for a suitable subset of TLA. The interpreter generates (finite) runs of the system described by the specification, which can thus be validated by the user. For reasons of complexity, it is impossible to animate an arbitrary first-order TLA specification; even the satisfiability problem for that logic is Σ¹₁-complete. Our restrictions concern the syntactic form of specifications, which ensure that finite models can be generated incrementally. They do not constrain the domains of system variables or restrict the non-determinism inherent in a specification, which is important in the realm of reactive systems. In contrast, model checking techniques allow to exhaustively analyse the (infinite) runs of finite-state systems. It is generally agreed that the development of reactive systems benefits from the use of both animation for the initial modelling phase, complemented by model checking of system abstractions for the verification of crucial system components. The organization of the paper is as follows: in sections 2 and 3 we discuss the overall role of animation for system development, illustrating its purpose at the hand of a simple example, and discuss executable temporal logics. Section 4 constitutes the main body of this paper; we there define the syntax and semantics of an executable...
The Jikes research virtual machine project: building an open-source research community This paper describes the evolution of the Jikes™ Research Virtual Machine project from an IBM internal research project, called Jalapeño, into an open-source project. After summarizing the original goals of the project, we discuss the motivation for releasing it as an open-source project and the activities performed to ensure the success of the project. Throughout, we highlight the unique challenges of developing and maintaining an open-source project designed specifically to support a research community.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se...
1.11
0.1
0.1
0.004
0.000033
0.000007
0
0
0
0
0
0
0
0
Dually nondeterministic functions Nondeterminacy is a fundamental notion in computing. We show that it can be described by a general theory that accounts for it in the form in which it occurs in many programming contexts, among them specifications, competing agents, data refinement, abstract interpretation, imperative programming, process algebras, and recursion theory. Underpinning these applications is a theory of nondeterministic functions; we construct such a theory. The theory consists of an algebra with which practitioners can reason about nondeterministic functions, and a denotational model to establish the soundness of the theory. The model is based on the idea of free completely distributive lattices over partially ordered sets. We deduce the important properties of nondeterministic functions.
HOL-Boogie -- An Interactive Prover for the Boogie Program-Verifier Boogie is a program verification condition generator for an imperative core language. It has front-ends for the programming languages C# and C enriched by annotations in first-order logic. Its verification conditions -- constructed via a wp calculus from these annotations -- are usually transferred to automated theorem provers such as Simplify or Z3. In this paper, however, we present a proof-environment, HOL-Boogie, that combines Boogie with the interactive theorem prover Isabelle/HOL. In particular, we present specific techniques combining automated and interactive proof methods for code-verification. We will exploit our proof-environment in two ways: First, we present scenarios to "debug" annotations (in particular: invariants) by interactive proofs. Second, we use our environment also to verify "background theories", i.e. theories for data-types used in annotations as well as memory and machine models underlying the verification method for C.
Dual unbounded nondeterminacy, recursion, and fixpoints In languages with unbounded demonic and angelic nondeterminacy, functions acquire a surprisingly rich set of fixpoints. We show how to construct these fixpoints, and describe which ones are suitable for giving a meaning to recursively defined functions. We present algebraic laws for reasoning about them at the language level, and construct a model to show that the laws are sound. The model employs a new kind of power domain-like construct for accommodating arbitrary nondeterminacy.
Logical Specifications for Functional Programs We present a formal method of functional program development based on step-by-step transformation.
Monotone predicate transformers as up-closed multirelations In the study of semantic models for computations two independent views predominate: relational models and predicate transformer semantics. Recently the traditional relational view of computations as binary relations between states has been generalised to multirelations between states and properties allowing the simultaneous treatment of angelic and demonic nondeterminism. In this paper the two-level nature of multirelations is exploited to provide a factorisation of up-closed multirelations which clarifies exactly how multirelations model nondeterminism. Moreover, monotone predicate transformers are, in the precise sense of duality, up-closed multirelations. As such they are shown to provide a notion of effectivity of a specification for achieving a given postcondition.
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
Guarded commands, nondeterminacy and formal derivation of programs So-called “guarded commands” are introduced as a building block for alternative and repetitive constructs that allow nondeterministic program components for which at least the activity evoked, but possibly even the final state, is not necessarily uniquely determined by the initial state. For the formal derivation of programs expressed in terms of these constructs, a calculus will be shown.
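A toy executable reading of the alternative construct IF (my simplification: random choice stands in for demonic nondeterminism), using the classic max example where overlapping guards make the nondeterminism harmless:

```python
# Evaluate one Dijkstra-style IF ... FI over a state dictionary.
import random

def if_fi(state, guarded_commands, rng=random):
    """guarded_commands: list of (guard, command) pairs; a guard maps the
    state to bool, a command maps the state to a new state."""
    enabled = [cmd for guard, cmd in guarded_commands if guard(state)]
    if not enabled:
        raise RuntimeError("abort: no guard enabled")
    return rng.choice(enabled)(state)   # any enabled branch may be taken

# m := max(x, y): when x == y both guards hold and either branch is correct.
state = {"x": 3, "y": 7}
state = if_fi(state, [
    (lambda s: s["x"] >= s["y"], lambda s: {**s, "m": s["x"]}),
    (lambda s: s["y"] >= s["x"], lambda s: {**s, "m": s["y"]}),
])
print(state["m"])   # -> 7
```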
Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Using emoticons to reduce dependency in machine learning techniques for sentiment classification Sentiment Classification seeks to identify a piece of text according to its author's general feeling toward their subject, be it positive or negative. Traditional machine learning techniques have been applied to this problem with reasonable success, but they have been shown to work well only when there is a good match between the training and test data with respect to topic. This paper demonstrates that match with respect to domain and time is also important, and presents preliminary experiments with training data labeled with emoticons, which has the potential of being independent of domain, topic and time.
Requirements Specification for Process-Control Systems The paper describes an approach to writing requirements specifications for process-control systems, a specification language that supports this approach, and an example application of the approach and the language on an industrial aircraft collision avoidance system (TCAS II). The example specification demonstrates: the practicality of writing a formal requirements specification for a complex, process-control system; and the feasibility of building a formal model of a system using a specification language that is readable and reviewable by application experts who are not computer scientists or mathematicians. Some lessons learned in the process of this work, which are applicable both to forward and reverse engineering, are also presented.
Graph rewrite systems for program optimization Graph rewrite systems can be used to specify and generate program optimizations. For termination of the systems several rule-based criteria are developed, defining exhaustive graph rewrite systems. For nondeterministic systems stratification is introduced which automatically selects single normal forms. To illustrate how far the methodology reaches, parts of the lazy code motion optimization are specified. The resulting graph rewrite system classes can be evaluated by a uniform algorithm, which forms the basis for the optimizer generator OPTIMIX. With this tool several optimizer components have been generated, and some numbers on their speed are presented.
On Teaching Visual Formalisms A graduate course on visual formalisms for reactive systems emphasized using such languages for not only specification and requirements but also (and predominantly) actual execution. The course presented two programming approaches: an intra-object approach using statecharts and an interobject approach using live sequence charts. Using each approach, students built a small system of their choice and then combined the two systems.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.115378
0.061144
0.043804
0.02757
0.008533
0.003452
0.000168
0.000009
0
0
0
0
0
0
The Semantics of Semantic Annotation Semantic metadata will play a significant role in the provision of the Semantic Web. Agents will need metadata that describes the content of resources in order to perform operations, such as retrieval, over those resources. In addition, if rich semantic metadata is supplied, those agents can then employ reasoning over the metadata, enhancing their processing power. Key to this approach is the provision of annotations, both through automatic and human means. The semantics of these annotations, however, in terms of the mechanisms through which they are interpreted and presented to the user, are sometimes unclear. In this paper, we identify a number of candidate interpretations of annotation, and discuss the impact these interpretations may have on Semantic Web applications.
Verification and validation of knowledge-based systems Knowledge-based systems (KBSs) are being used in many applications areas where their failures can be costly because of losses in services, property or even life. To ensure their reliability and dependability, it is therefore important that these systems are verified and validated before they are deployed. This paper provides perspectives on issues and problems that impact the verification and validation (V&V) of KBSs. Some of the reasons why V&V of KBSs is difficult are presented. The paper also provides an overview of different techniques and tools that have been developed for performing V&V activities. Finally, some of the research issues that are relevant for future work in this field are discussed
Knowledge management and the dynamic nature of knowledge Knowledge management (KM) or knowledge sharing in organizations is based on an understanding of knowledge creation and knowledge transfer. In implementation, KM is an effort to benefit from the knowledge that resides in an organization by using it to achieve the organization's mission. The transfer of tacit or implicit knowledge to explicit and accessible formats, the goal of many KM projects, is challenging, controversial, and endowed with ongoing management issues. This article argues that effective knowledge management in many disciplinary contexts must be based on understanding the dynamic nature of knowledge itself. The article critiques some current thinking in the KM literature and concludes with a view towards knowledge management programs built around knowledge as a dynamic process.
Guest Editor's Introduction: Knowledge-Management Systems--Converting and Connecting
Fuzzy semantic analysis and formal specification of conceptual knowledge Conceptual knowledge can be specified using one of the methods of formal specification of the semantics of a computer program: axiomatic semantics, denotational semantics, or operational semantics. For example, axiomatic semantics can be used to specify the conceptual knowledge of a medical doctor in an expert system for medical diagnosis. The problem is, however, that the knowledge of the expert is not always crisp and well defined. In such cases, a mean for specifying fuzzy conceptual knowledge is required. This paper proposes a method for the specifications of fuzzy conceptual knowledge. To this end, the concepts of fuzzy axiomatic semantics and fuzzy denotational semantics are developed. Fuzzy semantics is a generalization of classical semantics.
The CG Formalism as an Ontolingua for Web-Oriented Representation Languages The semantic Web entails the standardization of representation mechanisms so that the knowledge contained in a Web document can be retrieved and processed on a semantic level. RDF seems to be the emerging encoding scheme for that purpose. However, there are many different sorts of documents on the Web that do not use RDF as their primary coding scheme. It is expected that many one-to-one mappings between pairs of document representation formalisms will eventually arise. This would create a situation where a young standard such as RDF would generate update problems for all these mappings as it evolves, which is inevitable. Rather, we advocate the use of a common Ontolingua for all these encoding formalisms. Though there may be many knowledge representation formalisms suited for that task, we advocate the use of the conceptual graph formalism.
OIL: An Ontology Infrastructure for the Semantic Web Currently, computers are changing from single isolated devices to entry points into a worldwide network of information exchange and business transactions. Support in the exchange of data, information, and knowledge is becoming the key issue in computer technology today. Ontologies provide a shared and common understanding of a domain that can be communicated between people and across application systems. Ontologies will play a major role in supporting information exchange processes in various areas. A prerequisite for such a role is the development of a joint standard for specifying and exchanging ontologies well integrated with existing Web standards. This article deals with precisely this necessity. The authors present OIL, a proposal for such a standard enabling the semantic Web. It is based on existing proposals such as OKBC, XOL, and RDFS and enriches them with necessary features for expressing rich ontologies. The article presents the motivation, underlying rationale, modeling primitives, syntax, semantics, tool environment, and applications of OIL.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Principles of good software specification and their implications for specification languages Careful consideration of the primary uses of software specifications leads directly to three criteria for judging specifications, which can then be used to develop eight design principles for "good" specifications. These principles, in turn, result in eighteen implications for specification languages that strongly constrain the set of adequate specification languages and identify the need for several novel capabilities such as historical and future references, elimination of variables, and result specification.
Knowledge Visualization from Conceptual Structures This paper addresses the problem of automatically generating displays from conceptual graphs for visualization of the knowledge contained in them. Automatic display generation is important in validating the graphs and for communicating the knowledge they contain. Displays may be classified as literal, schematic, or pictorial, and also as static versus dynamic. At this time prototype software has been developed to generate static schematic displays of graphs representing knowledge of digital systems. The prototype software generates displays in two steps, by first joining basis displays associated with basis graphs from which the graph to be displayed is synthesized, and then assigning screen coordinates to the display elements. Other strategies for mapping conceptual graphs to schematic displays are also discussed. Keywords: Visualization, Representation Mapping, Conceptual Graphs, Schematic Diagrams, Pictures
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
Statechartable Petri nets. Petri nets and statecharts can model concurrent systems in a succinct way. While translations from statecharts to Petri nets exist, a well-defined translation from Petri nets to statecharts is lacking. Such a translation should map an input net to a corresponding statechart, having a structure and behaviour similar to that of the input net. Since statecharts can only model a restricted form of concurrency, not every Petri net has a corresponding statechart. We identify a class of Petri nets, called statechartable nets, that can be translated to corresponding statecharts. Statechartable Petri nets are structurally defined using the novel notion of an area. We also define a structural translation that maps each statechartable Petri net to a corresponding statechart. The translation is proven sound and complete for statechartable Petri nets.
On ternary square-free circular words Circular words are cyclically ordered finite sequences of letters. We give a computer-free proof of the following result by Currie: square-free circular words over the ternary alphabet exist for all lengths l except for 5, 7, 9, 10, 14, and 17. Our proof reveals an interesting connection between ternary square-free circular words and closed walks in the K(3,3) graph. In addition, our proof implies an exponential lower bound on the number of such circular words of length l and allows one to list all lengths l for which such a circular word is unique up to isomorphism.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.201143
0.201143
0.201143
0.201143
0.201143
0.100571
0.033602
0.000015
0
0
0
0
0
0
Portable runtime support for graph-oriented parallel and distributed programming In this paper, we describe the design and implementation of a portable run-time system for GOP, a graph-oriented programming framework aiming at providing high-level abstractions for configuring and programming cooperative parallel processes. The runtime system provides an interface with a library of programming primitives to the low-level facilities required to support graph-oriented communications and synchronization. The implementation is on top of the Parallel Virtual Machine (PVM) in a local area network of Sun workstations. Issues related to the implementation of graph operations in a distributed environment are discussed. Performance of the runtime system is evaluated by estimating the overheads associated with using GOP primitives as opposed to PVM.
Distributed data structures: a complexity-oriented view The problem of designing, implementing and operating a data structure in a distributed system is studied from a complexity oriented point of view. Various relevant issues are addressed via the development of an example structure. The structure evolves through a sequence of steps, each oriented towards attacking a different aspect of the problem. The paper concentrates on deterministic structures featuring low memory requirements, memory balance and efficient access protocols. Among the issues treated are centerless organizations of data structures, background maintenance of memory balancing, employing redundancy for increasing search efficiency and concurrent accesses to distributed structures.
Paradigms for process interaction in distributed programs Distributed computations are concurrent programs in which processes communicate by message passing. Such programs typically execute on network architectures such as networks of workstations or distributed memory parallel machines (i.e., multicomputers such as hypercubes). Several paradigms—examples or models—for process interaction in distributed computations are described. These include networks of filters, clients, and servers, heartbeat algorithms, probe/echo algorithms, broadcast algorithms, token-passing algorithms, decentralized servers, and bags of tasks. These paradigms are applicable to numerous practical problems. They are illustrated by solving problems, including parallel sorting, file servers, computing the topology of a network, distributed termination detection, replicated databases, and parallel adaptive quadrature. Solutions to all problems are derived in a step-wise fashion from a general specification of the problem to a concrete solution. The derivations illustrate techniques for developing distributed algorithms.
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: [email protected], URL: http://www-formal.stanford.edu/jmc/) by starting with the class of expressions called S-expressions and the functions called...
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Voice as sound: using non-verbal voice input for interactive control We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction and then the system performs a certain operation. Our goal is to achieve more direct, immediate interaction, like using a button or joystick, by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance the traditional voice recognition approach.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Characterizing plans as a set of constraints—the model—a framework for comparative analysis This paper presents an approach to representing and manipulating plans based on a model of plans as a set of constraints. The <I-N-OVA> model is used to characterise the plan representation used within O-Plan and to relate this work to emerging formal analyses of plans and planning. This synergy of practical and formal approaches can stretch the formal methods to cover realistic plan representations as needed for real problem solving, and can improve the analysis that is possible for production planning systems. <I-N-OVA> is intended to act as a bridge to improve dialogue between a number of communities working on formal planning theories, practical planning systems and systems engineering process management methodologies. It is intended to support new work on automatic manipulation of plans, human communication about plans, principled and reliable acquisition of plan information, and formal reasoning about plans.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, as well as power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.05
0.028571
0
0
0
0
0
0
0
0
0
0
0
Non-Fragile Robust Strictly Dissipative Control of Disturbed T–S Fuzzy Systems with Input Saturation In this paper, a non-fragile controller for uncertain disturbed Takagi–Sugeno (T–S) fuzzy systems is proposed based on the non-parallel distributed compensation (non-PDC) concept with a strictly (Q, G, R) − α-dissipative criterion. To investigate a general T–S fuzzy model from a practical viewpoint, it is supposed that the T–S model consists of actuator saturation and external disturbances. Moreover, in order to handle the uncertainties of the practical components which realize the gains of the control signal, we consider bounded uncertainties in the controller gains, which lead to a non-fragile T–S fuzzy controller. By employing a multiple Lyapunov function for the controller synthesis, less conservative non-PDC design conditions are derived in contrast with the common quadratic Lyapunov function-based ones. Sufficient conditions for the existence of such a controller are derived in terms of linear matrix inequalities (LMIs). Furthermore, to conquer the problem of specification of the upper bounds for the derivative of the grades of T–S fuzzy membership functions, we propose a novel method for obtaining the corresponding LMIs. The success of the developed technique is demonstrated through a numerical example.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, as well as power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Intuitionistic Refinement Calculus Refinement calculi are program logics which formalize the “top-down” methodology of software development promoted by Dijkstra and Wirth in the early days of structured programming. I present here the shallow embedding of a refinement calculus into constructive type theory. This embedding involves monad transformers and the computational reflexion of weakest-preconditions, using a continuation passing style. It should allow one to reason about many programs combining non-functional features (state, exceptions, etc) with purely functional ones (higher-order functions, structural recursion, etc).
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Pigs from sausages? Reengineering from assembler to C via FermaT transformations Software reengineering has been described as being "about as easy as reconstructing a pig from a sausage" (Comput. Canada 18 (1992) 35). But the development of program transformation theory, as embodied in the FermaT transformation system, has turned this miraculous feat into a practical possibility. This paper describes the theory behind the FermaT system and describes a recent migration project in which over 544,000 lines of assembler "sausage" (part of a large embedded system) were transformed into efficient and maintainable structured C code.
MetaWSL and meta-transformations in the FermaT transformation system A program transformation is an operation which can be applied to any program (satisfying the transformation's applicability conditions) and returns a semantically equivalent program. In the FermaT transformation system, program transformations are carried out in a wide spectrum language, called WSL, and the transformations themselves are written in an extension of WSL called MetaWSL, which was specifically designed to be a domain-specific language for writing program transformations. As a result, FermaT is capable of transforming its own source code via meta-transformations. This paper introduces MetaWSL and describes some applications of meta-transformations in the FermaT system.
Program Analysis By Formal Transformation This paper treats Knuth and Szwarcfiter's topological sorting program as a case study for the analysis of a program by formal transformations. This algorithm was selected for the case study because it is a particularly challenging program for any reverse engineering method. Besides a complex control flow, the program uses arrays to represent various linked lists and sets, which are manipulated in various 'ingenious' ways so as to squeeze the last ounce of performance from the algorithm. Our aim is to manipulate the program, using semantics-preserving operations, to produce an abstract specification. The transformations are carried out in the WSL language, a 'wide spectrum language' which includes both low-level program operations and high-level specifications, and which has been specifically designed to be easy to transform.
Recursion Removal/Introduction by Formal Transformation: An Aid to Program Development and Program Comprehension The transformation of a recursive program to an iterative equivalent is a fundamental operation in computer science. In the reverse direction, the task of reverse engineering (analysing a given program in order to determine its specification) can be greatly ameliorated if the program can be re-expressed in a suitable recursive form. However, the existing recursion removal transformations, such as ...
Refinement concepts formalised in higher order logic A theory of commands with weakest precondition semantics is formalised using the HOL proof assistant system. The concept of refinement between commands is formalised, a number of refinement rules are proved and it is shown how the formalisation can be used for proving refinements of actual program texts correct.
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
Formal methods: state of the art and future directions E.M. Clarke and J.M. Wing. CR categories: Mechanical verification; Specification techniques; F.4.1 Mathematical Logic and...
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support.
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.066667
0.05
0.008696
0
0
0
0
0
0
0
0
0
A New Filter Design Method for a Class of Fuzzy Systems With Time Delays In this article, the problem of filtering is studied for a class of nonlinear systems subject to time delays. The dynamics of nonlinear systems are characterized by Takagi–Sugeno (T–S) affine-fuzzy models. First, an extended bounded real lemma is established. In the process of analysis, a novel membership-dependent Lyapunov–Krasovskii functional is constructed, contributing to reducing the conserv...
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Context-based adaptive zigzag scanning for image coding.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Three-dimensional SPIHT coding of volume images with random access and resolution scalability End users of large volume image datasets are often interested only in certain features that can be identified as quickly as possible. For hyperspectral data, these features could reside only in certain ranges of spectral bands and certain spatial areas of the target. The same holds true for volume medical images for a certain volume region of the subject's anatomy. High spatial resolution may be the ultimate requirement, but in many cases a lower resolution would suffice, especially when rapid acquisition and browsing are essential. This paper presents a major extension of the 3D-SPIHT (set partitioning in hierarchical trees) image compression algorithm that enables random access decoding of any specified region of the image volume at a given spatial resolution and given bit rate from a single codestream. Final spatial and spectral (or axial) resolutions are chosen independently. Because the image wavelet transform is encoded in tree blocks and the bit rates of these tree blocks are minimized through a rate-distortion optimization procedure, the various resolutions and qualities of the images can be extracted while reading a minimum amount of bits from the coded data. The attributes and efficiency of this 3D-SPIHT extension are demonstrated for several medical and hyperspectral images in comparison to the JPEG2000 Multicomponent algorithm.
Quality evaluation of progressive lossy-to-lossless remote-sensing image coding Progressive lossy-to-lossless methods for hyper-spectral image coding are becoming common in remote-sensing. However, as remote-sensing imagery is sometimes fed directly into an automated process, there are several alternative distortion measures directed to quantify the image quality with regard to how this process will perform. In this scenario, we investigate the quality evolution in the lossy regime of progressive lossy-to-lossless and perform a detailed evaluation.
Extending the CCSDS Recommendation for Image Data Compression for Remote Sensing Scenarios This paper presents prominent extensions that have been proposed for the Consultative Committee for Space Data Systems Recommendation for Image Data Compression (CCSDS-122-B-1). Thanks to the proposed extensions, the Recommendation gains several important featured advantages: It allows any number of spatial wavelet decomposition levels; it provides scalability by quality, position, resolution, and...
Quality criteria benchmark for hyperspectral imagery Hyperspectral data appear to be of a growing interest over the past few years. However, applications for hyperspectral data are still in their infancy as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression, specifically, will have a role to play, provided its impact on remote sensing application...
Progressive 3-D Coding of Hyperspectral Images Based on JPEG 2000 In this letter we propose a new technique for progressive coding of hyperspectral data. Specifically, we employ a hybrid three-dimensional wavelet transform for spectral and spatial decorrelation in the framework of Part 2 of the JPEG 2000 standard. Both onboard and on-the-ground compression are addressed. The resulting technique is compliant with the JPEG 2000 family of standards and provides com...
Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures he observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model "reliable computing platform" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Formal methods: state of the art and future directions
Performance evaluation in content-based image retrieval: overview and proposals Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented.
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support...
Better knowledge management through knowledge engineering In recent years the term knowledge management has been used to describe the efforts of organizations to capture, store, and deploy knowledge. Most current knowledge management activities rely on database and Web technology; currently, few organizations have a systematic process for capturing knowledge, as distinct from data. The authors present a case study where knowledge engineering practices support knowledge management by a drilling optimization group in a large service company. The case study illustrates three facets of the knowledge management task: First, knowledge is captured by a knowledge acquisition process that uses a conceptual model of aspects of the company's business domain to guide the capture of cases. Second, knowledge is stored using a knowledge representation language to codify the structured knowledge in a number of knowledge bases, which together constitute a knowledge repository. Third, knowledge is deployed by running the knowledge bases in a knowledge server, accessible on the company intranet.
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.1
0.02
0.009524
0.002597
0
0
0
0
0
0
0
0
0
NL-OOPS: from natural language to object oriented requirements using the natural language processing system LOLITA This paper describes NL-OOPS, a CASE tool that supports requirements analysis by generating object oriented models from natural language requirements documents. The full natural language analysis is obtained using as a core system the Natural Language Processing System LOLITA. The object oriented analysis module implements an algorithm for the extraction of the objects and their associations for use in creating object models.
A systematic review of goal-oriented requirements management frameworks for business process compliance.
Requirements Classification and Reuse: Crossing Domain Boundaries A serious problem in the classification of software project artefacts for reuse is the natural partitioning of classification terms into many separate domains of discourse. This problem is particularly pronounced when dealing with requirements artefacts that need to be matched with design components in the refinement process. In such a case, requirements can be described with terms drawn from a problem domain (e.g. games), whereas designs with the use of terms characteristic for the solution domain (e.g. implementation). The two domains have not only distinct terminology, but also different semantics and use of their artefacts. This paper describes a method of cross-domain classification of requirements texts with a view to facilitating their reuse and their refinement into reusable design components.
Automating the Extraction of Rights and Obligations for Regulatory Compliance Government regulations are increasingly affecting the security, privacy and governance of information systems in the United States, Europe and elsewhere. Consequently, companies and software developers are required to ensure that their software systems comply with relevant regulations, either through design or re-engineering. We previously proposed a methodology for extracting stakeholder requirements, called rights and obligations, from regulations. In this paper, we examine the challenges to developing tool support for this methodology using the Cerno framework for textual semantic annotation. We present the results from two empirical evaluations of a tool called "Gaius T." that is implemented using the Cerno framework and that extracts a conceptual model from regulatory texts. The evaluation, carried out on the U.S. HIPAA Privacy Rule and the Italian accessibility law, measures the quality of the produced models and the tool's effectiveness in reducing the human effort to derive requirements from regulations.
A hybrid knowledge representation as a basis of requirement specification and specification analysis A formal requirement specification language, the frame-and-rule oriented requirement specification language FRORL, developed to facilitate the specification, analysis, and development of a software system is presented. The surface syntax of FRORL is based on the concepts of frames and production rules that may bear hierarchical relationships to each other, relying on multiple inheritance. To provide thorough semantic foundations, FRORL is based on a nonmonotonic variant of Horn-clause logic. Using the machinery of Horn-clause logic, various properties of a FRORL specification can be analyzed. Among the external properties of FRORL are formality, object-orientedness, and a wide spectrum of life cycle phases. Intrinsic properties are modularity, provision for incremental development, inheritance, refinement, reusability, prototyping, and executability. A software development environment based on FRORL has been implemented using the C language on a Sun workstation
Research Directions in Requirements Engineering In this paper, we review current requirements engineering (RE) research and identify future research directions suggested by emerging software needs. First, we overview the state of the art in RE research. The research is considered with respect to technologies developed to address specific requirements tasks, such as elicitation, modeling, and analysis. Such a review enables us to identify mature areas of research, as well as areas that warrant further investigation. Next, we review several strategies for performing and extending RE research results, to help delineate the scope of future research directions. Finally, we highlight what we consider to be the "hot" current and future research topics, which aim to address RE needs for emerging systems of the future.
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Rethinking the Concept of User Involvement. Within the field of information systems, user involvement generally refers to participation in the systems development process by potential users or their representatives and is measured as a set of behaviors or activities that such individuals perform. This article argues for a separation of the constructs of user participation (a set of behaviors or activities performed by users in the system development process) and user involvement (a subjective psychological state reflecting the importance and personal relevance of a system to the user). Such a distinction is not only more consistent with conceptualizations of involvement found in other disciplines, but it also leads to a number of new and interesting hypotheses. These hypotheses promise a richer theoretical network that describes the role and importance of participation and involvement in the implementation process.
Systematic Incremental Validation of Reactive Systems via Sound Scenario Generalization Validating the specification of a reactive system, such as a telephone switching system, traffic controller, or automated network service, is difficult, primarily because it is extremely hard even to state a set of complete and correct requirements, let alone to prove that a specification satisfies them. In the ISAT project [10], end-user requirements are stated as concrete behavior scenarios, and a multi-functional apprentice system aids the human developer in acquiring and maintaining a specification consistent with the scenarios. ISAT's Validation Assistant (isat-va) embodies a novel, systematic, and incremental approach to validation based on the novel technique of sound scenario generalization, which automatically states and proves validation lemmas. This technique enables isat-va to organize the validity proof around a novel knowledge structure, the library of generalized fragments, and provides automated progress tracking and semi-automated help in increasing proof coverage. The approach combines the advantages of software testing and automated theorem proving of formal requirements, avoiding most of their shortcomings, while providing unique advantages of its own.
No Silver Bullet: Essence and Accidents of Software Engineering
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support...
A research typology for object-oriented analysis and design This article evaluates current research on object-oriented analysis and design (OOAD). Critical components in OOAD are identified and various OOAD techniques (i.e., processes or methods) and representations are compared based on these components. Strong and weak areas in OOAD are identified and areas for future research are discussed in this article.
Notes on Nonrepetitive Graph Colouring. A vertex colouring of a graph is nonrepetitive on paths if there is no path v_1, v_2, ..., v_{2t} such that v_i and v_{t+i} receive the same colour for all i = 1, 2, ..., t. We determine the maximum density of a graph that admits a k-colouring that is nonrepetitive on paths. We prove that every graph has a subdivision that admits a 4-colouring that is nonrepetitive on paths. The best previous bound was 5. We also study colourings that are nonrepetitive on walks, and provide a conjecture that would imply that every graph with maximum degree Δ has an f(Δ)-colouring that is nonrepetitive on walks. We prove that every graph with treewidth k and maximum degree Δ has an O(kΔ)-colouring that is nonrepetitive on paths, and an O(kΔ^3)-colouring that is nonrepetitive on walks.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.061319
0.04
0.04
0.022857
0.011528
0.004444
0.000244
0.000128
0.000061
0.000013
0
0
0
0
Quantized control of event-triggered networked systems with time-varying delays The paper is a study of quantized control for stochastic Markov jump systems with interval time-varying delays and bounded system noise under event-triggered mechanism. A new scheme of Lyapunov–Krasovskii functional which contains the quadratic terms and integral terms is presented. Then quadratic convex technology, the theory of stochastic switching system, and logarithmic quantizer are applied to this paper. The design of quantized controller is obtained with those methodologies. Different from previous results, our derivation applies the idea of second-order convex combination. The conservatism of stability criteria for systems is reduced by using this method. A numerical example under different conditions is given to demonstrate the effectiveness and validity of the new design techniques.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Adapting an Object-Oriented Development Method In their first use of object-oriented design techniques, the authors found the Colbert object-oriented software development (OOSD) method helpful, after they had learned to think in terms of objects. The simulation tool they built is easy to maintain and its design can be reused.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What’s Next Physics-Informed Neural Networks (PINN) are neural networks (NNs) that encode model equations, like Partial Differential Equations (PDE), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integral-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs; the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, starting from the vanilla PINN and covering many other variants, such as physics-constrained neural networks (PCNN), variational hp-VPINN, and conservative PINN (CPINN). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and despite demonstrations that they can be more feasible in some contexts than classical numerical techniques like the Finite Element Method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Structured analysis using hierarchical predicate transition nets In previous work, a methodology for constructing hierarchical and structured high-level Petri net specifications has been developed. The authors further explore and refine the methodology for using hierarchical high-level Petri nets in systems analysis. The approach has adapted the results from the data flow diagram method and its application to modern systems analysis. The major steps and the associated techniques of the approach are presented and demonstrated through a library system
A Formal Definition of Hierarchical Predicate Transition Nets Hierarchical predicate transition nets have recently been introduced as a visual formalism for specifying complex reactive systems. They extend predicate transition nets with hierarchical structures so that large systems can be specified and understood stepwise, and thus are more suitable for real-world applications. In this paper, we provide a formal syntax and an algebraic semantics for hierarchical predicate transition nets, which establish the theory of hierarchical predicate transition nets for precise specification and formal reasoning.
Petri nets in software engineering The central issue of this contribution is a methodology for the use of nets in practical systems design. We show how nets of channels and agencies allow for a continuous and systematic transition from informal and imprecise to precise and formal specifications. This development methodology leads to the representation of dynamic systems behaviour (using Pr/T-Nets) which is apt to rapid prototyping and formal correctness proofs.
On statecharts with overlapping The problem of extending the language of statecharts to include overlapping states is considered. The need for such an extension is motivated and the subtlety of the problem is illustrated by exhibiting the shortcomings of naive approaches. The syntax and formal semantics of our extension are then presented, showing in the process that the definitions for conventional statecharts constitute a special case. Our definitions are rather complex, a fact that we feel points to the inherent difficulty of such an extension. We thus prefer to leave open the question of whether or not it should be adopted in practice.
Process-translatable Petri nets for the rapid prototyping of process control systems This paper presents a methodology for the rapid prototyping of process control systems, which is based on an original extension to classical Petri nets. The proposed nets, called PROT nets, provide a suitable framework to support the following activities: building an operational specification model; evaluation, simulation, and validation of the model; automatic translation into program structures. In particular, PROT nets are shown to be translatable into Ada® program structures concerning concurrent processes and their synchronizations. The paper illustrates this translation in detail using, as a working example, the problem of tool handling in a flexible manufacturing system.
A Graphical Query Language Based on an Extended E-R Model
On the Formal Semantics of Statecharts (Extended Abstract)
Flow Sketch Methodology: A Practical Requirements Definition Technique Based on Data Flow Concept This paper discusses a new simple methodology for defining software system requirements. We have developed a practical approach which we call FS (Flow Sketch) methodology. This methodology, based on the data flow concept, has been developed to provide a precise means of expressing users' requirements. The users' requirements are presented in data form on particular format cards. Data are classified and the relationships between data are decided through brainstorming. Then, a requirement definition model is defined. FS methodology employs diagrammatic notation. This notation is suitable for the visual and interactive description of the dynamic system data flow. As a result, misunderstandings of the software system between the software producer and software user will decrease.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
An Effective Implementation for the Generalized Input-Output Construct of CSP
Inconsistency Handling in Multiperspective Specifications The development of most large and complex systems necessarily involves many people - each with their own perspectives on the system defined by their knowledge, responsibilities, and commitments. To address this we have advocated distributed development of specifications from multiple perspectives. However, this leads to problems of identifying and handling inconsistencies between such perspectives. Maintaining absolute consistency is not always possible. Often this is not even desirable since this can unnecessarily constrain the development process, and can lead to the loss of important information. Indeed, since the real world forces us to work with inconsistencies, we should formalise some of the usually informal or extra-logical ways of responding to them. This is not necessarily done by eradicating inconsistencies but rather by supplying logical rules specifying how we should act on them. To achieve this, we combine two lines of existing research: the ViewPoints framework for perspective development, interaction and organisation, and a logic-based approach to inconsistency handling. This paper presents our technique for inconsistency handling in the ViewPoints framework by using simple examples.
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
LANSF: a protocol modelling environment and its implementation SUMMARY LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.11525
0.0575
0.038444
0.014437
0.000295
0.000067
0.000017
0.000001
0
0
0
0
0
0
A dynamic bayesian network click model for web search ranking As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. The main difficulty however comes from the so-called position bias: URLs appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance.
Evaluating document clustering for interactive information retrieval We consider the problem of organizing and browsing the top ranked portion of the documents returned by an information retrieval system. We study the effectiveness of a document organization in helping a user to locate the relevant material among the retrieved documents as quickly as possible. In this context we examine a set of clustering algorithms and experimentally show that a clustering of the retrieved documents can be significantly more effective than traditional ranked list approach. We also show that the clustering approach can be as effective as the interactive relevance feedback based on query expansion while retaining an important advantage -- it provides the user with a valuable sense of control over the feedback process.
Characterizing search intent diversity into click models Modeling a user's click-through behavior in click logs is a challenging task due to the well-known position bias problem. Recent advances in click models have adopted the examination hypothesis which distinguishes document relevance from position bias. In this paper, we revisit the examination hypothesis and observe that user clicks cannot be completely explained by relevance and position bias. Specifically, users with different search intents may submit the same query to the search engine but expect different search results. Thus, there might be a bias between user search intent and the query formulated by the user, which can lead to the diversity in user clicks. This bias has not been considered in previous works such as UBM, DBN and CCM. In this paper, we propose a new intent hypothesis as a complement to the examination hypothesis. This hypothesis is used to characterize the bias between the user search intent and the query in each search session. This hypothesis is very general and can be applied to most of the existing click models to improve their capacities in learning unbiased relevance. Experimental results demonstrate that after adopting the intent hypothesis, click models can better interpret user clicks and achieve a significant NDCG improvement.
Rank and relevance in novelty and diversity metrics for recommender systems The Recommender Systems community is paying increasing attention to novelty and diversity as key qualities beyond accuracy in real recommendation scenarios. Despite the rise of interest and work on the topic in recent years, we find that a clear common methodological and conceptual ground for the evaluation of these dimensions is still to be consolidated. Different evaluation metrics have been reported in the literature but the precise relation, distinction or equivalence between them has not been explicitly studied. Furthermore, the metrics reported so far miss important properties such as taking into consideration the ranking of recommended items, or whether items are relevant or not, when assessing the novelty and diversity of recommendations. We present a formal framework for the definition of novelty and diversity metrics that unifies and generalizes several state of the art metrics. We identify three essential ground concepts at the roots of novelty and diversity: choice, discovery and relevance, upon which the framework is built. Item rank and relevance are introduced through a probabilistic recommendation browsing model, building upon the same three basic concepts. Based on the combination of ground elements, and the assumptions of the browsing model, different metrics and variants unfold. We report experimental observations which validate and illustrate the properties of the proposed metrics.
On Clustering Validation Techniques Cluster analysis aims at identifying groups of similar objects and, therefore, helps to discover the distribution of patterns and interesting correlations in large data sets. It has been the subject of wide research since it arises in many application domains in engineering, business and social sciences. Especially in the last years, the availability of huge transactional and experimental data sets and the arising requirements for data mining created needs for clustering algorithms that scale and can be applied in diverse domains. This paper introduces the fundamental concepts of clustering while it surveys the widely known clustering algorithms in a comparative way. Moreover, it addresses an important issue of the clustering process regarding the quality assessment of the clustering results. This is also related to the inherent features of the data set under concern. A review of clustering validity measures and approaches available in the literature is presented. Furthermore, the paper illustrates the issues that are under-addressed by the recent algorithms and gives the trends in the clustering process.
Cumulated gain-based evaluation of IR techniques Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance of IR techniques and allow interpretation, for example, from the user point of view.
Filter keywords and majority class strategies for company name disambiguation in twitter Monitoring the online reputation of a company starts by retrieving all (fresh) information where the company is mentioned; and a major problem in this context is that company names are often ambiguous (apple may refer to the company, the fruit, the singer, etc.). The problem is particularly hard in microblogging, where there is little context to disambiguate: this was the task addressed in the WePS-3 CLEF lab exercise in 2010. This paper introduces a novel fingerprint representation technique to visualize and compare system results for the task. We apply this technique to the systems that originally participated in WePS-3, and then we use it to explore the usefulness of filter keywords (those whose presence in a tweet reliably signals either the positive or the negative class) and finding the majority class (whether positive or negative tweets are predominant for a given company name in a tweet stream) as signals that contribute to address the problem. Our study shows that both are key signals to solve the task, and we also find that, remarkably, the vocabulary associated to a company in the Web does not seem to match the vocabulary used in Twitter streams: even a manual extraction of filter keywords from web pages has substantially lower recall than an oracle selection of the best terms from the Twitter stream.
"Piaf" vs "Adele": classifying encyclopedic queries using automatically labeled training data Encyclopedic queries express the intent of obtaining information typically available in encyclopedias, such as biographical, geographical or historical facts. In this paper, we train a classifier for detecting the encyclopedic intent of web queries. For training such a classifier, we automatically label training data from raw query logs. We use click-through data to select positive examples of encyclopedic queries as those queries that mostly lead to Wikipedia articles. We investigated a large set of features that can be generated to describe the input query. These features include both term-specific patterns as well as query projections on knowledge bases items (e.g. Freebase). Results show that using these feature sets it is possible to achieve an F1 score above 87%, competing with a Google-based baseline, which uses a much wider set of signals to boost the ranking of Wikipedia for potential encyclopedic queries. The results also show that both query projections on Wikipedia article titles and Freebase entity match represent the most relevant groups of features. When the training set contains frequent positive examples (i.e rare queries are excluded) results tend to improve.
An image multiresolution representation for lossless and lossy compression We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods.
Protocol Verification Via Projections The method of projections is a new approach to reduce the complexity of analyzing nontrivial communication protocols. A protocol system consists of a network of protocol entities and communication channels. Protocol entities interact by exchanging messages through channels; messages in transit may be lost, duplicated as well as reordered. Our method is intended for protocols with several distinguishable functions. We show how to construct image protocols for each function. An image protocol is specified just like a real protocol. An image protocol system is said to be faithful if it preserves all safety and liveness properties of the original protocol system concerning the projected function. An image protocol is smaller than the original protocol and can typically be more easily analyzed. Two protocol examples are employed herein to illustrate our method. An application of this method to verify a version of the high-level data link control (HDLC) protocol is described in a companion paper.
Using Abstraction and Model Checking to Detect Safety Violations in Requirements Specifications Exposing inconsistencies can uncover many defects in software specifications. One approach to exposing inconsistencies analyzes two redundant specifications, one operational and the other property-based, and reports discrepancies. This paper describes a "practical" formal method, based on this approach and the SCR (Software Cost Reduction) tabular notation, that can expose inconsistencies in software requirements specifications. Because users of the method do not need advanced mathematical training or theorem proving skills, most software developers should be able to apply the method without extraordinary effort. This paper also describes an application of the method which exposed a safety violation in the contractor-produced software requirements specification of a sizable, safety-critical control system. Because the enormous state space of specifications of practical software usually renders direct analysis impractical, a common approach is to apply abstraction to the specification. To reduce the state space of the control system specification, two "pushbutton" abstraction methods were applied, one which automatically removes irrelevant variables and a second which replaces the large, possibly infinite, type sets of certain variables with smaller type sets. Analyzing the reduced specification with the model checker Spin uncovered a possible safety violation. Simulation demonstrated that the safety violation was not spurious but an actual defect in the original specification.
Software Tools and Environments Any system that assists the programmer with some aspect of programming can be considered a programming tool. Similarly, a system that assists in some phase of the software development process can be considered a software tool. A programming environment is a suite of programming tools designed to simplify programming and thereby enhance programmer productivity. A software engineering environment extends this to software tools and the whole software development process. Software tools are categorized by the phase of software development and the particular problems that they address. Software environments are characterized by the type and kinds of tools they contain and thus the aspects of software development they address. Additionally, software environments are distinguished by how the tools they include are related, that is, the type and degree of integration among the tools, and by the size and nature of the systems they are designed to address. Software tools and environments are designed to enhance productivity. Many tools do this directly by automating or simplifying some task. Others do it indirectly, either by facilitating more powerful programming languages, architectures, or systems, or by making the software development task more enjoyable. Still others attempt to enhance productivity by providing the user with information that might be needed for the task at hand.
A component-based framework for modeling and analyzing probabilistic real-time systems A challenging research issue of analyzing a real-time system is to model the tasks composing the system and the resource provided to the system. In this paper, we propose a probabilistic component-based model which abstracts in the interfaces both the functional and non-functional requirements of such systems. This approach allows designers to unify in the same framework probabilistic scheduling techniques and compositional guarantees that go from soft to hard real-time. We provide sufficient schedulability tests for task systems using such framework when the scheduler is either preemptive Fixed-Priority or Earliest Deadline First.
Analysis and Design of Secure Massive MIMO Systems in the Presence of Hardware Impairments. To keep the hardware costs of future communications systems manageable, the use of low-cost hardware components is desirable. This is particularly true for the emerging massive multiple-input multiple-output (MIMO) systems which equip base stations (BSs) with a large number of antenna elements. However, low-cost transceiver designs will further accentuate the hardware impairments, which are presen...
1.105504
0.10045
0.052843
0.050225
0.02558
0.013885
0.000208
0.000091
0
0
0
0
0
0
Some properties of sequential predictors for binary Markov sources Universal prediction of the next outcome of a binary sequence drawn from a Markov source with unknown parameters is considered. For a given source, the predictability is defined as the least attainable expected fraction of prediction errors. A lower bound is derived on the maximum rate at which the predictability is asymptotically approached uniformly over all sources in the Markov class. This bound is achieved by a simple majority predictor. For Bernoulli sources, bounds on the large deviations performance are investigated. A lower bound is derived for the probability that the fraction of errors will exceed the predictability by a prescribed amount Δ>0. This bound is achieved by the same predictor if Δ is sufficiently small.
Fast Constant Division Routines When there is no division circuit available, the arithmetical function of division is normally performed by a library subroutine. The library subroutine normally allows both the divisor and the dividend to be variables, and requires the execution of hundreds of assembly instructions. This correspondence provides a fast algorithm for performing the integer division of a variable by a predetermined divisor. Based upon this algorithm, an efficient division routine has been constructed for each odd divisor up to 55. These routines may be implemented in assembly languages, in microcodes, and in special-purpose circuits.
Relations between entropy and error probability The relation between the entropy of a discrete random variable and the minimum attainable probability of error made in guessing its value is examined. While Fano's inequality provides a tight lower bound on the error probability in terms of the entropy, the present authors derive a converse result - a tight upper bound on the minimal error probability in terms of the entropy. Both bounds are sharp, and can draw a relation, as well, between the error probability for the maximum a posteriori (MAP) rule, and the conditional entropy (equivocation), which is a useful uncertainty measure in several applications. Combining this relation and the classical channel coding theorem, the authors present a channel coding theorem for the equivocation which, unlike the channel coding theorem for error probability, is meaningful at all rates. This theorem is proved directly for DMCs, and from this proof it is further concluded that for R ≥ C the equivocation achieves its minimal value of R−C at the rate of n^(−1/2), where n is the block length.
Sources Which Maximize the Choice of a Huffman Coding Tree
Generalized kraft inequality and arithmetic coding Algorithms for encoding and decoding finite strings over a finite alphabet are described. The coding operations are arithmetic involving rational numbers l_i as parameters such that ∑_i 2^(−l_i) ≤ 2^(−ε). This coding technique requires no blocking, and the per-symbol length of the encoded string approaches the associated entropy within ε. The coding speed is comparable to that of conventional coding methods.
On the JPEG Model for Lossless Image Compression
Context-based, adaptive, lossless image coding We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts
Stationary probability model for bitplane image coding through local average of wavelet coefficients. This paper introduces a probability model for symbols emitted by bitplane image coding engines, which is conceived from a precise characterization of the signal produced by a wavelet transform. Main insights behind the proposed model are the estimation of the magnitude of wavelet coefficients as the arithmetic mean of its neighbors' magnitude (the so-called local average), and the assumption that emitted bits are under-complete representations of the underlying signal. The local average-based probability model is introduced in the framework of JPEG2000. While the resulting system is not JPEG2000 compatible, it preserves all features of the standard. Practical benefits of our model are enhanced coding efficiency, more opportunities for parallelism, and improved spatial scalability.
New simple and efficient color space transformations for lossless image compression. •New transformation requiring 4 operations per pixel gave the best overall ratios.•New transformation done in 2 operations gave the best JPEG2000 and JPEG XR ratios.•A transformation from human vision system outperformed established ones.•PCA/KLT resulted in ratios inferior to ratios of new and established transformations.
3D medical image compression based on multiplierless low-complexity RKLT and shape-adaptive wavelet transform A multiplierless low-complexity reversible integer Karhunen-Loève transform (Low-RKLT) is proposed based on matrix factorization. Conventional methods based on KLT suffer from high computational complexity and cannot be applied in lossless medical image compression. To solve these two problems, a multiplierless Low-RKLT is investigated using multi-lifting in this paper. Combined with an ROI coding method, we propose a progressive lossy-to-lossless ROI compression method for three-dimensional (3D) medical images with high performance. In our proposed method, Low-RKLT is used for inter-frame decorrelation after SA-DWT in the spatial domain. Simulation results show that the proposed method performs much better in both lossless and lossy compression than the 3D-DWT-based method.
Retrenchment: An Engineering Variation on Refinement It is argued that refinement, in which I/O signatures stay the same, preconditions are weakened and postconditions strengthened, is too restrictive to describe all but a fraction of many realistic developments. An alternative notion is proposed called retrenchment, which allows information to migrate between I/O and state aspects of operations at different levels of abstraction, and which allows only a fraction of the high level behaviour to be captured at the low level. This permits more of the informal aspects of design to be formally captured and checked. The details are worked out for the B-Method.
A task allocation model for distributed computing systems This paper presents a task allocation model that allocates application tasks among processors in distributed computing systems satisfying: 1) minimum interprocessor communication cost, 2) balanced utilization of each processor, and 3) all engineering application requirements.
Distributed Mobile Communication Base Station Diagnosis and Monitoring Using Multi-agents Most inherently distributed systems require self diagnosis and on-line monitoring. This is especially true in the domains of power transmission and mobile communication. Much effort has been expended in developing on-site monitoring systems for distributed power transformers and mobile communication base stations.In this paper, a new approach has been employed to implement the autonomous self diagnosis and on-site monitoring using multi-agents on mobile communication base stations.
The Use of Machine Learning Algorithms in Recommender Systems: A Systematic Review. •A survey of machine learning (ML) algorithms in recommender systems (RSs) is provided.•The surveyed studies are classified in different RS categories.•The studies are classified based on the types of ML algorithms and application domains.•The studies are also analyzed according to main and alternative performance metrics.•LNCS and EWSA are the main sources of studies in this research field.
1.07391
0.080071
0.065621
0.026868
0.020073
0.010092
0.000347
0.000033
0.000017
0.000004
0
0
0
0
Semantic Web and Knowledge Management in User Data Privacy.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Personal and Contextual Requirements Engineering A framework for requirements analysis is proposed that accounts for individual and personal goals and the effect of time and context on personal requirements. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. Different implementation pathways have cost-benefit implications for stakeholders, so cost-benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system.
A requirements monitoring framework for enterprise systems Requirements compliant software is becoming a necessity. Fewer and fewer organizations will run their critical transactions on software that has no visible relationship to its requirements. Businesses wish to see their software being consistent with their policies. Moreover, partnership agreements are pressuring less mature organizations to improve their systems. Businesses that rely on web services, for example, are vulnerable to the problems of their web service providers. While electronic commerce has increased the speed of on-line transactions, the technology for monitoring requirements compliance—especially for transactions—has lagged behind. To address the requirements monitoring problem for enterprise information systems, we integrate techniques for requirements analysis and software execution monitoring. Our framework assists analysts in the development of requirements monitors for enterprise services. The deployed system raises alerts when services succeed or fail to satisfy their specified requirements, thereby making software requirements visible. The framework usage is demonstrated with an analysis of ebXML marketplace specifications. An analyst applies goal analysis to discover potential service obstacles, and then derives requirements monitors and a distributed monitoring system. Once deployed, the monitoring system provides alerts when obstacles occur. A summary of the framework implementation is presented, along with analysis of two monitor component implementations. We conclude that the approach implemented in the framework, called ReqMon, provides real-time feedback on requirements satisfaction, and thereby provides visibility into requirements compliance of enterprise information systems.
Goal-Oriented Requirements Engineering: A Roundtrip from Research to Practice The software industry is more than ever facing the challenge of delivering WYGIWYW software (What You Get Is What You Want). A well-structured document specifying adequate, complete, consistent, precise, and measurable requirements is a critical prerequisite for such software. Goals have been recognized to be among the driving forces for requirements elicitation, elaboration, organization, analysis, negotiation, documentation, and evolution. Growing experience with goal-oriented requirements engineering suggests synergistic links between research in this area and good practice. We discuss one journey along this road from influencing ideas and research results to tool developments to good practice in industrial projects. On the way, we discuss some lessons learnt, obstacles to technology transfer, and challenges for better requirements engineering research and practice.
Handling Obstacles in Goal-Oriented Requirements Engineering Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system.
On the interplay between consistency, completeness, and correctness in requirements evolution The initial expression of requirements for a computer-based system is often informal and possibly vague. Requirements engineers need to examine this often incomplete and inconsistent brief expression of needs. Based on the available knowledge and expertise, assumptions are made and conclusions are deduced to transform this ‘rough sketch’ into more complete, consistent, and hence correct requirements. This paper addresses the question of how to characterize these properties in an evolutionary framework, and what relationships link these properties to a customer's view of correctness. Moreover, we describe in rigorous terms the different kinds of validation checks that must be performed on different parts of a requirements specification in order to ensure that errors (i.e. cases of inconsistency and incompleteness) are detected and marked as such, leading to better quality requirements.
Integrating multiple specifications using domain goals Design is a process which inherently involves tradeoffs. We are currently pursuing a model of specification design which advocates the integration of multiple perspectives of a system. We have mapped the integration problem onto the negotiation problem of many issues between many agents in order to apply known resolution techniques. Part of that mapping requires the modeling of domain goals which serve as issues for negotiation. Herein, we describe the use of domain goals in our conflict resolution process which is applied during the integration of specifications.
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly, in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skills as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
A program integration algorithm that accommodates semantics-preserving transformations Given a program Base and two variants, A and B, each created by modifying separate copies of Base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that includes both sets of changes as well as the portions of Base preserved in both variants. Text-based integration techniques, such as the one used by the UNIX diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of Base, A, and B. The first program-integration algorithm to provide such guarantees was developed by Horwitz, Prins, and Reps. However, a limitation of that algorithm is that it incorporates no notion of semantics-preserving transformations. This limitation causes the algorithm to be overly conservative in its definition of interference. For example, if one variant changes the way a computation is performed (without changing the values computed) while the other variant adds code that uses the result of the computation, the algorithm would classify those changes as interfering. This paper describes a new integration algorithm that is able to accommodate semantics-preserving transformations.
An analytic framework for specifying and analyzing imprecise requirements There are at least three challenges with requirements analysis. First, it needs to bridge informal requirements, which are often vague and imprecise, to formal specification methods. Second, requirements often conflict with each other. Third, existing formal requirement specification methodologies are limited in supporting trade-off analysis between conflicting requirements and identifying the impact of a requirement change to the rest of the system. In this paper, an analytic framework is developed for the specification and analysis of imprecise requirements. In this framework, the elasticity of imprecise requirements is captured using fuzzy logic and the relationships between requirements are formally classified into four categories: conflicting, cooperative, mutually exclusive and irrelevant. This formal foundation facilitates the inference of relationships between requirements for detecting implicit conflicts, to assess the relative priorities of requirements for resolving conflicts, and to assess the effect of a requirement change.
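As a toy illustration of the classification idea in the abstract above (not the paper's actual fuzzy formalism), one can model each requirement's satisfaction as a membership function over a shared design parameter and label a pair of requirements by how their satisfactions co-vary; all functions, names, and thresholds below are invented:

```python
import numpy as np

# Hypothetical illustration: each requirement's satisfaction is a fuzzy
# membership function over a shared design parameter (e.g., response time).
def sat_fast_response(t_ms):
    # Satisfaction decreases as response time grows.
    return np.clip(1.0 - t_ms / 500.0, 0.0, 1.0)

def sat_low_cost(t_ms):
    # Cheaper hardware is slower, so satisfaction grows with allowed time.
    return np.clip(t_ms / 500.0, 0.0, 1.0)

def classify(sat_a, sat_b, domain):
    """Label a pair of requirements by how their satisfactions co-vary."""
    a, b = sat_a(domain), sat_b(domain)
    corr = np.corrcoef(a, b)[0, 1]
    if corr < -0.5:
        return "conflicting"   # one's gain is the other's loss
    if corr > 0.5:
        return "cooperative"   # they improve together
    return "irrelevant"        # no systematic relationship

domain = np.linspace(0.0, 500.0, 101)
print(classify(sat_fast_response, sat_low_cost, domain))  # -> conflicting
```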
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
A system development methodology for knowledge-based systems Phased linear system-development methodologies inadequately address the problems of knowledge acquisition and engineering. A different, detailed and systematic approach to building knowledge-based systems is presented, namely a knowledge-based system development life cycle. This specially tailored prototyping methodology replaces traditional phases and stages with 'processes'. Processes are activated, deactivated, and reactivated dynamically as needed during system development, thereby allowing knowledge engineers to iteratively define, develop, refine, and test an evolutionary knowledge/data representation. A case study is presented in which Blue Cross/Blue Shield of South Carolina and the Institute of Information Management, Technology, and Policy at the College of Business Administration, University of South Carolina, initiated a joint venture to automate the review process for medical insurance claims.
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel makes it very likely that many employees who contributed to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system and providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24
0.12
0.093333
0.01039
0.005714
0.001545
0.000533
0.000267
0.000133
0
0
0
0
0
Languages for the specification of software A variety of specification languages exist that support one or more phases of software development. This article emphasizes languages that support the functional phase, i.e., languages that can be used to define the observable behavior of a system. The languages surveyed include Z, Prolog, SF, Clear, Larch, PAISLey, Spec, CSP, SEGRAS and BagL. The article divides the languages into four major categories based on the way the language specifies the external behavior of the system and on the ability of the language to specify concurrent systems. Each language section includes a discussion of the constructs of the language, a specification of a problem in the language, and an evaluation of the language. The article is intended to acquaint the reader with a wide range of functional specification languages.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
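To make the search mechanics in the abstract above concrete, here is a minimal Python sketch of tabu search on a toy multiconstraint knapsack instance; the instance, the single-bit-flip neighborhood, the tenure, and the aspiration rule are all illustrative choices, not the paper's specialized choice rules or Target Analysis:

```python
import random

# Toy 0/1 multiconstraint knapsack: maximize p.x s.t. W x <= c, x in {0,1}^n.
profits = [10, 13, 7, 8, 11]
weights = [[3, 4, 2, 3, 4],   # one row per resource constraint
           [2, 3, 4, 1, 3]]
capacity = [8, 7]

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(len(x))) <= c
               for w, c in zip(weights, capacity))

def value(x):
    return sum(p * xi for p, xi in zip(profits, x))

def tabu_search(iters=200, tenure=4, seed=0):
    random.seed(seed)
    x = [0] * len(profits)              # start from the empty knapsack
    best, best_val = x[:], value(x)
    tabu = {}                           # flipped index -> expiry iteration
    for it in range(iters):
        # Evaluate all single-bit flips that stay feasible.
        moves = []
        for i in range(len(x)):
            y = x[:]; y[i] ^= 1
            if feasible(y):
                moves.append((value(y), i, y))
        if not moves:
            break
        moves.sort(reverse=True)
        for v, i, y in moves:
            # Aspiration: accept a tabu move if it beats the incumbent.
            if tabu.get(i, -1) < it or v > best_val:
                x = y
                tabu[i] = it + tenure   # forbid flipping i back for a while
                if v > best_val:
                    best, best_val = y[:], v
                break
    return best, best_val

print(tabu_search())
```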
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Pre-specified performance based model reduction for time-varying delay systems in fuzzy framework This paper attempts to provide a new solution to the model approximation problem for dynamic systems with time-varying delays under the fuzzy framework. For a given high-order system, our focus is on the construction of a reduced-order model, which approximates the original one in a prescribed error performance level and guarantees the asymptotic stability of the corresponding error system. Based on the reciprocally convex technique, a less conservative stability condition is established for the dynamic error system with a given error performance index. Furthermore, the reduced-order model is eventually obtained by applying the projection approach, which converts the model approximation into a sequential minimization problem subject to linear matrix inequality constraints by employing the cone complementary linearization algorithm. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed method.
New approach on robust delay-dependent H∞ control for uncertain T-S fuzzy systems with interval time-varying delay This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither any model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
New approaches on H∞ control of T-S fuzzy systems with interval time-varying delay This paper considers the delay-dependent stabilization problems and H∞ control problems for uncertain T-S fuzzy systems with time-varying delay in a range. A new method is proposed by defining new Lyapunov functionals and introducing some integral inequalities. The merit of the proposed conditions lies in their being less conservative than the existing ones, which is achieved by paying careful attention to the subtle difference between the two integral terms, which is always ignored in the existing literature, and to the free variables which replace the Leibniz-Newton formula with integral inequalities. The fuzzy state feedback gains are derived through the numerical solution of a set of linear matrix inequalities (LMIs). The efficiency and the benefits of our method are demonstrated by numerical examples.
New delay-dependent stability criteria for T-S fuzzy systems with time-varying delay This paper is concerned with the stability problem of uncertain T-S fuzzy systems with time-varying delay by employing a further improved free-weighting matrix approach. By taking the relationship among the time-varying delay, its upper bound and their difference into account, some less conservative LMI-based delay-dependent stability criteria are obtained without ignoring any useful terms in the derivative of the Lyapunov-Krasovskii functional. Finally, two numerical examples are given to demonstrate the effectiveness and the merits of the proposed methods.
Stability and stabilization of delayed T-S fuzzy systems: a delay partitioning approach This paper proposes a new approach, namely, the delay partitioning approach, to solving the problems of stability analysis and stabilization for continuous time-delay Takagi-Sugeno fuzzy systems. Based on the idea of delay fractioning, a new method is proposed for the delay-dependent stability analysis of fuzzy time-delay systems. Due to the instrumental idea of delay partitioning, the proposed stability condition is much less conservative than most of the existing results. The conservatism reduction becomes more obvious with the partitioning getting thinner. Based on this, the problem of stabilization via the so-called parallel distributed compensation scheme is also solved. Both the stability and stabilization results are further extended to time-delay fuzzy systems with time-varying parameter uncertainties. All the results are formulated in the form of linear matrix inequalities (LMIs), which can be readily solved via standard numerical software. The advantage of the results proposed in this paper lies in their reduced conservatism, as shown via detailed illustrative examples. The idea of delay partitioning is well demonstrated to be efficient for conservatism reduction and could be extended to solving other problems related to fuzzy delay systems.
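For reference, the standard T-S fuzzy time-delay model and parallel-distributed-compensation law that the delay-dependent results above build on can be written as follows; the notation is assumed for illustration and is not copied from any one of the cited papers:

```latex
% Rule i (i = 1..r): IF \theta_1(t) is M_{i1} and ... and \theta_p(t) is M_{ip}
% THEN the local linear delayed dynamics hold:
\dot{x}(t) = A_i\,x(t) + A_{di}\,x\bigl(t - d(t)\bigr) + B_i\,u(t),
\qquad 0 \le d(t) \le \bar{d}, \quad \dot{d}(t) \le \mu.

% The overall model blends the rules with normalized weights h_i(\theta(t)):
\dot{x}(t) = \sum_{i=1}^{r} h_i(\theta(t))
  \Bigl[ A_i\,x(t) + A_{di}\,x\bigl(t - d(t)\bigr) + B_i\,u(t) \Bigr],
\qquad \sum_{i=1}^{r} h_i(\theta(t)) = 1,\;\; h_i \ge 0.

% Parallel distributed compensation (PDC) pairs one gain with each rule:
u(t) = \sum_{j=1}^{r} h_j(\theta(t))\,K_j\,x(t).
```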
Stability and Stabilization of Discrete-Time T-S Fuzzy Systems With Time-Varying Delay via Cauchy-Schwartz-Based Summation Inequality. This paper proposes new stability and stabilization conditions for discrete-time fuzzy systems with time-varying delays. By constructing a suitable Lyapunov–Krasovskii functional and introducing a new summation inequality based on the inequality of Cauchy–Schwartz form, which enhances the feasible region of the stability criterion for discrete-time systems with time-varying delay, a stability criterion for such systems is established. In order to show the effectiveness of the proposed inequality, which provides more tight lower bound of a summation term of quadratic form, a delay-dependent stability criterion for such systems is derived within the framework of linear matrix inequalities, which can be easily solved by various effective optimization algorithms. Going one step forward, the proposed inequality is applied to a stabilization problem in discrete-time fuzzy systems with time-varying delays. The advantages of the proposed stability and stabilization criteria are illustrated via two numerical examples.
Discrete inequalities based on multiple auxiliary functions and their applications to stability analysis of time-delay systems This paper presents new discrete inequalities for single summation and double summation. These inequalities are based on multiple auxiliary functions and include the Jensen discrete inequality and the discrete Wirtinger-based inequality as special cases. An application of these discrete inequalities to analyze stability of linear discrete systems with an interval time-varying delay is studied and a less conservative stability condition is obtained. Three numerical examples are given to show the effectiveness of the obtained stability condition.
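As a concrete anchor for the abstract above, the Jensen discrete inequality that the new inequalities include as a special case can be stated as:

```latex
% Jensen discrete inequality: for R = R^\top \succ 0 and vectors x_1,...,x_n,
\Bigl(\sum_{i=1}^{n} x_i\Bigr)^{\top} R\,\Bigl(\sum_{i=1}^{n} x_i\Bigr)
\;\le\; n\,\sum_{i=1}^{n} x_i^{\top} R\,x_i,
% equivalently, the lower-bound form used when bounding Lyapunov terms:
-\sum_{i=1}^{n} x_i^{\top} R\,x_i \;\le\;
-\frac{1}{n}\,\Bigl(\sum_{i=1}^{n} x_i\Bigr)^{\top} R\,\Bigl(\sum_{i=1}^{n} x_i\Bigr).
```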
An Improved Input Delay Approach to Stabilization of Fuzzy Systems Under Variable Sampling In this paper, we investigate the problem of stabilization for sampled-data fuzzy systems under variable sampling. A novel Lyapunov–Krasovskii functional (LKF) is defined to capture the characteristic of sampled-data systems, and an improved input delay approach is proposed. By the use of an appropriate enlargement scheme, new stability and stabilization criteria are obtained in terms of linear matrix inequalities (LMIs). Compared with the existing results, the newly obtained ones contain less conservatism. Some illustrative examples are given to show the effectiveness of the proposed method and the significant improvement on the existing results.
Neural network control for a closed-loop system using feedback-error-learning. This paper presents new learning schemes using feedback-error-learning for a neural network model applied to adaptive nonlinear feedback control. Feedback-error-learning was proposed as a learning method for forming a feedforward controller that uses the output of a feedback controller as the error for training a neural network model. Using new schemes for nonlinear feedback control, the actual responses after learning correspond to the desired responses which are defined by an inverse reference model implemented as a conventional feedback controller. In this respect, these methods are similar to Model Reference Adaptive Control (MRAC) applied to linear or linearized systems. It is shown that learning impedance control is derived when one proposed scheme is used in Cartesian space. We show the results of applying these learning schemes to an inverted pendulum and a 2-link manipulator. We also discuss the convergence properties of the neural network models employed in these learning schemes by applying the Lyapunov method to the averaged equations associated with the stochastic differential equations which describe the system dynamics.
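A minimal numerical sketch of the feedback-error-learning idea described above, assuming a toy first-order plant and a linear feedforward model; all gains, the trajectory, and the feature choice are invented for illustration:

```python
import numpy as np

# Feedback-error-learning (FEL) sketch: the feedback controller's output
# serves as the training error for the feedforward (inverse-model) term.
np.random.seed(0)
dt, kp = 0.01, 5.0
w = np.zeros(2)                         # feedforward weights on [x_d, dx_d]

def plant(x, u):                        # toy first-order plant: dx = -x + u
    return x + dt * (-x + u)

x, eta = 0.0, 0.05
for step in range(5000):
    t = step * dt
    x_d, dx_d = np.sin(t), np.cos(t)    # desired trajectory and derivative
    phi = np.array([x_d, dx_d])         # features for the feedforward term
    u_ff = w @ phi                      # learned inverse-model contribution
    u_fb = kp * (x_d - x)               # conventional feedback controller
    x = plant(x, u_ff + u_fb)
    w += eta * u_fb * phi               # FEL: feedback output IS the error

print("learned weights:", w)            # u_fb shrinks as the model improves
```

For this plant the exact inverse model is u = dx_d + x_d, so the weights should drift toward roughly [1, 1], with the feedback term taking over whenever the model is wrong.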
On Overview of KRL, a Knowledge Representation Language
Requirements interaction management Requirements interaction management (RIM) is the set of activities directed toward the discovery, management, and disposition of critical relationships among sets of requirements, which has become a critical area of requirements engineering. This survey looks at the evolution of supporting concepts and their related literature, presents an issues-based framework for reviewing processes and products, and applies the framework in a review of RIM state-of-the-art. Finally, it presents seven research projects that exemplify this emerging discipline.
A meta-model for restructuring stakeholder requirements
Nonrepetitive colorings of trees A coloring of the vertices of a graph G is nonrepetitive if no path in G forms a sequence consisting of two identical blocks. The minimum number of colors needed is the Thue chromatic number, denoted by π(G). A famous theorem of Thue asserts that π(P)=3 for any path P with at least four vertices. In this paper we study the Thue chromatic number of trees. In view of the fact that π(T) is bounded by 4 in this class, we aim to describe the 4-chromatic trees. In particular, we study the 4-critical trees which are minimal with respect to this property. Though there are many trees T with π(T)=4, we show that any of them has a sufficiently large subdivision H such that π(H)=3. The proof relies on Thue sequences with additional properties involving palindromic words. We also investigate nonrepetitive edge colorings of trees. By a similar argument we prove that any tree has a subdivision which can be edge-colored by at most Δ+1 colors without repetitions on paths.
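The property being counted above is easy to check by brute force on a path: a coloring is nonrepetitive (square-free) exactly when no contiguous block is immediately repeated. The example sequences below are invented:

```python
# Check whether a color sequence along a path is nonrepetitive (square-free).
def is_nonrepetitive(colors):
    n = len(colors)
    for start in range(n):
        for half in range(1, (n - start) // 2 + 1):
            if colors[start:start + half] == colors[start + half:start + 2 * half]:
                return False    # found two identical adjacent blocks
    return True

print(is_nonrepetitive([0, 1, 0, 2, 0, 1, 2]))   # True: square-free
print(is_nonrepetitive([0, 1, 2, 1, 2, 0]))      # False: block "1,2" repeats
```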
Core of coalition formation games and fixed-point methods. In coalition formation games where agents have preferences over coalitions to which they belong, the set of fixed points of an operator and the core of coalition formation games coincide. An acyclicity condition on preference profiles guarantees the existence of a unique core. An algorithm using that operator finds all core partitions whenever there exists one.
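The following brute-force sketch computes the core of a tiny hedonic game by enumerating partitions and blocking coalitions. The preference data are invented, and this is not the paper's fixed-point operator, only an illustration of the solution concept it computes:

```python
from itertools import combinations

players = ("a", "b", "c")
# Each player ranks the coalitions containing it, best first (invented data).
prefs = {
    "a": [("a", "b"), ("a", "b", "c"), ("a",), ("a", "c")],
    "b": [("a", "b"), ("b", "c"), ("a", "b", "c"), ("b",)],
    "c": [("b", "c"), ("a", "b", "c"), ("c",), ("a", "c")],
}
rank = {p: {c: i for i, c in enumerate(prefs[p])} for p in players}

def partitions(items):
    """Enumerate all set partitions of a list of players."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            block = tuple(sorted((first,) + others))
            remaining = [x for x in rest if x not in others]
            for tail in partitions(remaining):
                yield [block] + tail

def coalition_of(p, partition):
    return next(c for c in partition if p in c)

def blocking_coalition(partition):
    """A coalition S blocks if every member strictly prefers S."""
    all_coalitions = (tuple(sorted(s)) for k in range(1, len(players) + 1)
                      for s in combinations(players, k))
    for s in all_coalitions:
        if all(rank[p][s] < rank[p][coalition_of(p, partition)] for p in s):
            return s
    return None

core = [p for p in partitions(list(players)) if blocking_coalition(p) is None]
print(core)   # partitions from which no coalition wants to deviate
```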
1.201989
0.033877
0.025618
0.013157
0.007575
0.001739
0.000577
0.000248
0.000097
0
0
0
0
0
How Much Does I/Q Imbalance Affect Secrecy Capacity? Radio frequency front ends constitute a fundamental part of both conventional and emerging wireless systems. However, in spite of their importance, they are often assumed ideal, although they are practically subject to certain detrimental impairments, such as amplifier nonlinearities, phase noise, and in-phase and quadrature (I/Q) imbalance (IQI). This letter is devoted to the quantification and e...
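For context, one common baseband parameterization of I/Q imbalance models the impaired signal as a mix of the ideal signal and its conjugate image; conventions for the mismatch parameters vary across papers, so the form below is an assumption, not necessarily the one used in this letter:

```latex
% TX IQI model: the impaired signal mixes x with its conjugate image,
y = \xi_1\,x + \xi_2\,x^{*}, \qquad
\xi_1 = \tfrac{1}{2}\bigl(1 + \epsilon\,e^{j\phi}\bigr), \quad
\xi_2 = \tfrac{1}{2}\bigl(1 - \epsilon\,e^{j\phi}\bigr),
% with amplitude mismatch \epsilon and phase mismatch \phi; severity is
% usually reported as the image rejection ratio
\mathrm{IRR} = |\xi_1|^{2} / |\xi_2|^{2}.
```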
Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also the destination receives multiple versions of the message from the source, and one or more relays and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented
On the Performance of Cognitive Underlay Multihop Networks with Imperfect Channel State Information. This paper proposes and analyzes cognitive multihop decode-and-forward networks in the presence of interference due to channel estimation errors. To reduce interference on the primary network, a simple yet effective back-off control power method is applied for secondary multihop networks. For a given threshold of interference probability at the primary network, we derive the maximum back-off control power coefficient, which provides the best performance for secondary multihop networks. Moreover, it is shown that the number of hops for secondary network is upper-bounded under the fixed settings of the primary network. For secondary multihop networks, new exact and asymptotic expressions for outage probability (OP), bit error rate (BER) and ergodic capacity over Rayleigh fading channels are derived. Based on the asymptotic OP and BEP, a pivotal conclusion is reached that the secondary multihop network offers the same diversity order as compared with the network without back off. Finally, we verify the performance analysis through various numerical examples which confirm the correctness of our analysis for many channel and system settings and provide new insight into the design and optimization of cognitive multihop networks.
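A generic underlay power policy of the kind such analyses start from can be sketched as follows; this is a textbook form with assumed symbols, not the paper's exact back-off rule under imperfect CSI:

```latex
% The secondary transmit power respects both its own budget P_max and the
% primary interference limit I_p over the secondary-to-primary channel g,
P_s = \min\!\Bigl( P_{\max},\; \frac{I_p}{\lvert g\rvert^{2}} \Bigr),
% and a back-off coefficient 0 < \rho \le 1 (P_s' = \rho P_s) can then trade
% secondary performance for a lower interference probability under CSI errors.
```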
Robust Secure Beamforming in MISO Full-Duplex Two-Way Secure Communications Considering worst-case channel uncertainties, we investigate the robust secure beamforming design problem in multiple-input-single-output full-duplex two-way secure communications. Our objective is to maximize worst-case sum secrecy rate under weak secrecy conditions and individual transmit power constraints. Since the objective function of the optimization problem includes both convex and concave terms, we propose to transform convex terms into linear terms. We decouple the problem into four optimization problems and employ alternating optimization algorithm to obtain the locally optimal solution. Simulation results demonstrate that our proposed robust secure beamforming scheme outperforms the non-robust one. It is also found that when the regions of channel uncertainties and the individual transmit power constraints are sufficiently large, because of self-interference, the proposed two-way robust secure communication is proactively degraded to one-way communication.
Secrecy Outage Analysis for SIMO Underlay Cognitive Radio Networks over Generalized-K Fading Channels. In this letter, we consider a single-input multiple-output cognitive wiretap system over generalized-K channels, where the eavesdropper overhears the transmission from the secondary transmitter (ST) to the legitimate receiver. Both the primary user and the ST are equipped with a single antenna, whereas the legitimate and the eavesdropper receivers are equipped with multiple antennas. The close-for...
Artificial Noise-Aided Physical Layer Security in Underlay Cognitive Massive MIMO Systems with Pilot Contamination. In this paper, a secure communication model for cognitive multi-user massive multiple-input multiple-output (MIMO) systems with underlay spectrum sharing is investigated. A secondary (cognitive) multi-user massive MIMO system is operated by using underlay spectrum sharing within a primary (licensed) multi-user massive MIMO system. A passive multi-antenna eavesdropper is assumed to be eavesdropping upon either the primary or secondary confidential transmissions. To this end, a physical layer security strategy is provisioned for the primary and secondary transmissions via artificial noise (AN) generation at the primary base-station (PBS) and zero-forcing precoders. Specifically, the precoders are constructed by using the channel estimates with pilot contamination. In order to degrade the interception of confidential transmissions at the eavesdropper, the AN sequences are transmitted at the PBS by exploiting the excess degrees-of-freedom offered by its massive antenna array and by using random AN shaping matrices. The channel estimates at the PBS and secondary base-station (SBS) are obtained by using non-orthogonal pilot sequences transmitted by the primary user nodes (PUs) and secondary user nodes (SUs), respectively. Hence, these channel estimates are affected by intra-cell pilot contamination. In this context, the detrimental effects of intra-cell pilot contamination and channel estimation errors for physical layer secure communication are investigated. For this system set-up, the average and asymptotic achievable secrecy rate expressions are derived in closed-form. Specifically, these performance metrics are studied for imperfect channel state information (CSI) and for perfect CSI, and thereby, the secrecy rate degradation due to inaccurate channel knowledge and intra-cell pilot contamination is quantified. Our analysis reveals that a physical layer secure communication can be provisioned for both primary and secondary massive MIMO systems even with the channel estimation errors and pilot contamination.
A New Look at Dual-Hop Relaying: Performance Limits with Hardware Impairments. Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
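The saturation effect claimed above can already be seen in a single-link sketch with distortion noise proportional to the signal power; this is a simplification of the paper's dual-hop derivation, with κ standing in as an aggregate impairment level:

```latex
% Single-link sketch of the SNDR ceiling:
\mathrm{SNDR} = \frac{P\,\lvert h\rvert^{2}}
                     {\kappa^{2}\,P\,\lvert h\rvert^{2} + N_0}
\;\longrightarrow\; \frac{1}{\kappa^{2}}
\quad \text{as } P \to \infty,
% so the SNDR saturates at a hardware-determined ceiling instead of growing
% without bound as it would with ideal transceivers (\kappa = 0).
```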
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CLP(R) is used to illustrate verification techniques.
Software process modeling: principles of entity process models
Animation of Object-Z Specifications with a Set-Oriented Prototyping Language
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
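A toy Python illustration of the node-rewriting idea described above (unrelated to the actual Truffle/Graal APIs): a generic add node speculates on integer operands, rewrites itself into a specialized variant, and deoptimizes back if the speculation fails.

```python
# Self-specializing AST interpreter sketch; all class names are invented.
class Node:
    def execute(self, frame): raise NotImplementedError

class Literal(Node):
    def __init__(self, value): self.value = value
    def execute(self, frame): return self.value

class GenericAdd(Node):
    def __init__(self, left, right): self.left, self.right = left, right
    def execute(self, frame):
        a, b = self.left.execute(frame), self.right.execute(frame)
        if isinstance(a, int) and isinstance(b, int):
            # Speculate: future operands will be ints too. Rewrite in place.
            self.__class__ = IntAdd
        return a + b

class IntAdd(GenericAdd):
    def execute(self, frame):
        a, b = self.left.execute(frame), self.right.execute(frame)
        if not (isinstance(a, int) and isinstance(b, int)):
            self.__class__ = GenericAdd   # deoptimize: back to generic
            return a + b
        return a + b                      # guarded fast path

tree = GenericAdd(Literal(2), Literal(3))
print(tree.execute({}), type(tree).__name__)   # 5 IntAdd
```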
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
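For comparison, the classical Jensen integral inequality that such results tighten reads:

```latex
% Jensen integral inequality: for R = R^\top \succ 0 and integrable x(.),
\Bigl(\int_{a}^{b} x(s)\,ds\Bigr)^{\top} R\,\Bigl(\int_{a}^{b} x(s)\,ds\Bigr)
\;\le\; (b - a)\int_{a}^{b} x(s)^{\top} R\,x(s)\,ds,
% which is used to bound the integral terms arising from the derivative of
% Lyapunov-Krasovskii functionals for delayed systems.
```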
1.24
0.24
0.24
0.24
0.24
0.24
0.08
0
0
0
0
0
0
0
Reusable abstractions for modeling languages Model-driven engineering proposes the use of models to describe the relevant aspects of the system to be built and synthesize the final application from them. Models are normally described using Domain-Specific Modeling Languages (DSMLs), which provide primitives and constructs of the domain. Still, the increasing complexity of systems has raised the need for abstraction techniques able to produce simpler versions of the models while retaining some properties of interest. The problem is that developing such abstractions for each DSML from scratch is time and resource consuming. In this paper, our goal is reducing the effort to provide modeling languages with abstraction mechanisms. For this purpose, we have devised some techniques, based on generic programming and domain-specific meta-modeling, to define generic abstraction operations that can be reused over families of modeling languages sharing certain characteristics. Abstractions can make use of clustering algorithms as similarity criteria for model elements. These algorithms can be made generic as well, and customized for particular languages by means of annotation models. As a result, we have developed a catalog of reusable abstractions using the proposed techniques, together with a working implementation in the MetaDepth multi-level meta-modeling tool. Our techniques and prototypes demonstrate that it is feasible to build reusable and adaptable abstractions, so that similar abstractions need not be developed from scratch, and their integration in new or existing modeling languages is less costly.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Speech Recognition Using Deep Neural Networks: A Systematic Review. Over the past decades, a tremendous amount of research has been done on the use of machine learning for speech processing applications, especially speech recognition. However, in the past few years, research has focused on utilizing deep learning for speech-related applications. This new area of machine learning has yielded far better results when compared to others in a variety of applications including speech, and thus became a very attractive area of research. This paper provides a thorough examination of the different studies that have been conducted since 2006, when deep learning first arose as a new area of machine learning, for speech applications. A thorough statistical analysis is provided in this review which was conducted by extracting specific information from 174 papers published between the years 2006 and 2018. The results provided in this paper shed light on the trends of research in this area as well as bring focus to new research topics.
IoT-based wearable sensor for disease prediction and symptom analysis in the healthcare sector Maintaining good health is increasingly difficult today because of changing food habits and environments, so awareness of one's health condition is essential. Health-support systems face significant challenges such as a lack of adequate medical information, preventable errors, data threats, misdiagnosis, and delayed transmission. To overcome these problems, we propose a wearable sensor connected to Internet of Things (IoT) based big-data (data mining) analysis in healthcare. We design a Generalized Approximate Reasoning-based Intelligent Control (GARIC) scheme with regression rules to gather information about the patient from the IoT. The gathered data are then used to train an artificial intelligence (AI) model, a Boltzmann deep belief network, and a regularized genome-wide association study (GWAS) is used to predict diseases. If a person is affected by a disease, they receive a warning by SMS, e-mail, etc., after which they can obtain treatment and advice from doctors.
Language Teaching in 3D Virtual Worlds with Machinima: Reflecting on an Online Machinima Teacher Training Course This article is based on findings arising from a large, two-year EU project entitled "Creating Machinima to Enhance Online Language Learning and Teaching" (CAMELOT), which was the first to investigate the potential of machinima, a form of virtual filmmaking that uses screen captures to record activity in immersive 3D environments, for language teaching. The article examines interaction in two particular phases of the project: facilitator-novice teacher interaction in an online teacher training course which took place in Second Life, and teachers' field-testing of machinima which arose from it. Examining qualitative data from interviews and screen recordings following two iterations of a 6-week online teacher training course which was designed to train novice teachers how to produce machinima, and the evaluation of the field-testing, the article highlights the pitfalls teachers encountered and reinforces the argument that creating opportunities for pedagogical purposes in virtual worlds implies that teachers need to change their perspectives to take advantage of the affordances offered.
Resistance and Sexuality in Virtual Worlds: An LGBT Perspective Virtual worlds can provide a safe place for social movements of marginal and oppressed groups such as lesbian, gay, bisexual and transgender (LGBT). When the virtual safe places are under threat, the inhabitants of a virtual world register protests, which have critical implications for the real-world issues. The nature of emancipatory practices such as virtual protests in the digital realm research remains somewhat under-explored. Specifically, it remains to be seen how the oppressed communities such as LGBT take radical actions in virtual worlds in order to restore the imbalance of power. We conducted a 35-month netnographic study of an LGBT social movement in World of Warcraft. The lead researcher joined the LGBT social movement and data was captured through participant observations, discussion forums, and chat logs. Drawing on the critical theory of Michel Foucault, we present empirical evidence that illuminates emancipatory social movement practices in an online virtual world. The findings suggest that there are complex power relations in a virtual world and, when power balance is disrupted, LGBT players form complex ways to register protests, which invoke strategies to restore order in the virtual fields.
Word Play: A History Of Voice Interaction In Digital Games The use of voice interaction in digital games has a long and varied history of experimentation but has never achieved sustained, widespread success. In this article, we review the history of voice interaction in digital games from a media archaeology perspective. Through detailed examination of publicly available information, we have identified and classified all games that feature some form of voice interaction and have received a public release. Our analysis shows that the use of voice interaction in digital games has followed a tidal pattern: rising and falling in seven distinct phases in response to new platforms and enabling technologies. We note characteristic differences in the way Japanese and Western game developers have used voice interaction to create different types of relationships between players and in-game characters. Finally, we discuss the implications for game design and scholarship in light of the increasing ubiquity of voice interaction systems.
Multi-layer security of medical data through watermarking and chaotic encryption for tele-health applications. In this paper, we present a robust and secure watermarking approach using transform domain techniques for tele-health applications. The patient report/identity is embedded into the host medical image for the purpose of authentication, annotation and identification. For better confidentiality, we apply a chaos-based encryption algorithm to the watermarked image in a less complex manner. Experimental results clearly indicate that the proposed technique is highly robust and sufficiently secure against various forms of attacks, without any significant distortion between the watermarked and cover image. Further, the performance of our method is found to be better than existing state-of-the-art watermarking techniques under consideration. Furthermore, the quality of the watermarked image is estimated by a subjective measure, which is beneficial in the quality-driven healthcare industry.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
The contract net protocol: high-level communication and control in a distributed problem solver The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is affected by a negotiation process, a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks.
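As a sketch of the announcement-bid-award cycle this protocol describes, consider the following minimal, synchronous rendition in Python; the node names, the eligibility test, and the cost-based bid function are invented for illustration and are not part of the protocol specification.

    def contract_net(manager, contractors, task):
        # 1. The manager node broadcasts a task announcement.
        announcement = {"from": manager, "task": task}
        # 2. Eligible contractor nodes evaluate the announcement and
        #    submit bids (here a made-up cost estimate; lower is better).
        bids = [(node["cost"](announcement["task"]), node["name"])
                for node in contractors if node["can_do"](task)]
        if not bids:
            return None  # no node is able to execute the task
        # 3. The manager awards the contract to the best bidder.
        _, winner = min(bids)
        return {"award": task, "to": winner, "by": manager}

    contractors = [
        {"name": "node-1", "can_do": lambda t: True, "cost": lambda t: len(t)},
        {"name": "node-2", "can_do": lambda t: True, "cost": lambda t: 2 * len(t)},
    ]
    print(contract_net("manager-0", contractors, "integrate-sensor-data"))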
Cumulated gain-based evaluation of IR techniques Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance of IR techniques and allow interpretation, for example, from the user point of view.
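A minimal sketch of the three gain-based measures described above, assuming graded judgments on a four-point scale and a discount base of b = 2 (so the first two ranks are undiscounted); the sample gain vector is invented.

    import math

    def cg(gains):
        # cumulated gain: running sum of graded relevance scores
        return sum(gains)

    def dcg(gains):
        # discounted cumulated gain: devaluate late-retrieved documents by
        # dividing the gain at rank i by log_2(i) for ranks beyond the base
        return sum(g / math.log2(max(i, 2)) for i, g in enumerate(gains, 1))

    def ndcg(gains):
        # relative-to-the-ideal performance: DCG divided by the DCG of the
        # same gains re-sorted into the ideal (descending) order
        ideal = dcg(sorted(gains, reverse=True))
        return dcg(gains) / ideal if ideal > 0 else 0.0

    run = [3, 2, 3, 0, 1, 2]   # graded judgments for one ranked result list
    print(cg(run), round(dcg(run), 3), round(ndcg(run), 3))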
Programming Concepts, Methods and Calculi, Proceedings of the IFIP TC2/WG2.1/WG2.2/WG2.3 Working Conference on Programming Concepts, Methods and Calculi (PROCOMET '94) San Miniato, Italy, 6-10 June, 1994
Reasoning about Action Systems using the B-Method The action system formalism has been successfully used when constructing parallel and distributed systems in a stepwise manner within the refinement calculus. Usually the derivation is carried out manually. In order to be able to produce more trustworthy software, some mechanical tool is needed. In this paper we show how action systems can be derived and refined within the B-Toolkit, which is a mechanical tool supporting a software development method, the B-Method. We describe how action systems are embedded in the B-Method. Furthermore, we show how a typical and nontrivial refinement rule, the superposition refinement rule, is formalized and applied on action systems within the B-Method. In addition to providing tool support for action system refinement we also extend the application area of the B-Method to cover parallel and distributed systems. A derivation towards a distributed load balancing algorithm is given as a case study.
Analyzing User Requirements by Use Cases: A Goal-Driven Approach The purpose of requirements engineering is to elicit and evaluate necessary and valuable user needs. Current use-case approaches to requirements acquisition inadequately support use-case formalization and nonfunctional requirements. Based on industry trends and research, the authors have developed a method to structure use-case models with goals. They use a simple meeting planner system to illustrate the benefits of this new approach
A framework for analyzing and testing requirements with actors in conceptual graphs Software has become an integral part of many people's lives, whether knowingly or not. One key to producing quality software in time and within budget is to efficiently elicit consistent requirements. One way to do this is to use conceptual graphs. Requirements inconsistencies, if caught early enough, can prevent one part of a team from creating unnecessary design, code and tests that would be thrown out when the inconsistency was finally found. Testing requirements for consistency early and automatically is a key to a project being within budget. This paper will share an experience with a mature software project that involved translating software requirements specification into a conceptual graph and recommends several actors that could be created to automate a requirements consistency graph.
Algorithmic and enumerative aspects of the Moser-Tardos distribution Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovász Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that in certain conditions when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show in some cases this MT variant can run faster than the original MT algorithm itself, and develop the first-known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before by proving that the MT-distribution has "large" Rényi entropy and hence that its support-size is large.
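The basic operation the abstract refers to, searching the current configuration for an occurring bad event and resampling only the variables it depends on, fits in a few lines of Python; the toy instance (properly colouring a cycle) is ours and merely illustrates the resampling loop, not the paper's improved search procedures.

    import random

    def moser_tardos(variables, bad_events, sample):
        # set all variables independently, then repeatedly find an
        # occurring bad event and resample just the variables it depends on
        assignment = {v: sample(v) for v in variables}
        while True:
            bad = next((e for e in bad_events if e["holds"](assignment)), None)
            if bad is None:
                return assignment  # no bad event occurs: done
            for v in bad["vars"]:
                assignment[v] = sample(v)

    # Toy instance: avoid equal colours on adjacent vertices of a 6-cycle.
    n = 6
    events = [{"vars": [i, (i + 1) % n],
               "holds": (lambda a, i=i: a[i] == a[(i + 1) % n])}
              for i in range(n)]
    colours = lambda v: random.choice("RGB")
    print(moser_tardos(range(n), events, colours))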
1.011765
0.011765
0.011765
0.011765
0.005882
0.00098
0
0
0
0
0
0
0
0
Verifying and validating software requirements and design specifications These recommendations provide a good starting point for identifying and resolving software problems early in the life cycle, when they're relatively easy to handle.
Context constraints for compositional reachability analysis Behavior analysis of complex distributed systems has led to the search for enhanced reachability analysis techniques which support modularity and which control the state explosion problem. While modularity has been achieved, state explosion is still a problem. Indeed, this problem may even be exacerbated, as a locally minimized subsystem may contain many states and transitions forbidden by its environment or context. Context constraints, specified as interface processes, are restrictions imposed by the environment on subsystem behavior. Recent research has suggested that the state explosion problem can be effectively controlled if context constraints are incorporated in compositional reachability analysis (CRA). Although theoretically very promising, the approach has rarely been used in practice because it generally requires a more complex computational model and does not contain a mechanism to derive context constraints automatically. This article presents a technique to automate the approach while using a similar computational model to that of CRA. Context constraints are derived automatically, based on a set of sufficient conditions for these constraints to be transparently included when building reachability graphs. As a result, the global reachability graph generated using the derived constraints is shown to be observationally equivalent to that generated by CRA without the inclusion of context constraints. Constraints can also be specified explicitly by users, based on their application knowledge. Erroneous constraints which contravene transparency can be identified together with an indication of the error sources. User-specified constraints can be combined with those generated automatically. The technique is illustrated using a clients/server system and other examples.
Requirements Engineering: An Integrated View of Representation, Process, and Domain Reuse, system integration, and interoperability create a growing need for capturing, representing, and using application-level information about software-intensive systems and their evolution. In ESPRIT Basic Research Project NATURE, we are developing an integrative approach to requirements management based on a three- dimensional framework which addresses formalism as well as cognitive and social aspects. This leads to a new requirements process model which integrates human freedoms through allowing relatively free decisions in given situations. Classes of situations and decisions are defined with respect to the three-dimensional framework through the integration of informal and formal representations, theories of domain modeling, and the explicit consideration of nonfunctional requirements in teamwork. Technical support is provided by a conceptual modeling environment with knowledge acquisition through interactive as well as reverse modeling, and with similarity-based querying.
Applying synthesis principles to create responsive software systems The engineering of new software systems is a process of iterative refinement. Each refinement step involves understanding the problem, creating the proposed solution, describing or representing it, and assessing its viability. The assessment includes evaluating its correctness, its feasibility, and its preferability (when there are alternatives). Many factors affect preferability, such as maintainability, responsiveness, reliability, usability, etc. This discussion focuses on only one, the responsiveness of the software: that is, the response time or throughput as seen by the users. The understanding, creation, representation, and assessment steps are repeated until the proposed product of the refinement step "passes" the assessment.
An Interval Logic for Real-Time System Specification Formal techniques for the specification of real-time systems must be capable of describing system behavior as a set of relationships expressing the temporal constraints among events and actions, including properties of invariance, precedence, periodicity, liveness, and safety conditions. This paper describes a Temporal-Interval Logic with Compositional Operators (TILCO) designed expressly for the specification of real-time systems. TILCO is a generalization of classical temporal logics based on the operators eventually and henceforth; it allows both qualitative and quantitative specification of time relationships. TILCO is based on time intervals and can concisely express temporal constraints with time bounds, such as those needed to specify real-time systems. This approach can be used to verify the completeness and consistency of specifications, as well as to validate system behavior against its requirements and general properties. TILCO has been formalized by using the theorem prover Isabelle/HOL. TILCO specifications satisfying certain properties are executable by using a modified version of the Tableaux algorithm. This paper defines TILCO and its axiomatization, highlights the tools available for proving properties of specifications and for their execution, and provides an example of system specification and validation.
Domain-Specific Automatic Programming Domain knowledge is crucial to an automatic programming system, and the interaction between domain knowledge and programming is a central issue at the current time. The NIX project at Schlumberger-Doll Research has been investigating this issue in the context of two application domains related to oil well logging. Based on these experiments we have developed a framework for domain-specific automatic programming. Within the framework, programming is modeled in terms of two activities, formalization and implementation, each of which transforms descriptions of the program as it proceeds through intermediate states of development. The activities and transformations may be used to characterize the interaction of programming knowledge and domain knowledge in an automatic programming system.
PROTOB - A Hierarchical Object-Oriented CASE Tool for Distributed Systems This paper presents PROTOB, an object-oriented CASE system based on high level Petri nets called PROT nets. It consists of several tools supporting specification, modelling and prototyping activities within the framework of the operational software life cycle paradigm. As its major application area it addresses distributed systems, such as real-time embedded systems, communication protocols and manufacturing control systems. The paper illustrates a case study involving the design of a distributed file system.
Toward reference models for requirements traceability Requirements traceability is intended to ensure continued alignment between stakeholder requirements and various outputs of the system development process. To be useful, traces must be organized according to some modeling framework. Indeed, several such frameworks have been proposed, mostly based on theoretical considerations or analysis of other literature. This paper, in contrast, follows an empirical approach. Focus groups and interviews conducted in 26 major software development organizations demonstrate a wide range of traceability practices with distinct low-end and high-end users of traceability. From these observations, reference models comprising the most important kinds of traceability links for various development tasks have been synthesized. The resulting models have been validated in case studies and are incorporated in a number of traceability tools. A detailed case study on the use of the models is presented. Four kinds of traceability link types are identified and critical issues that must be resolved for implementing each type and potential solutions are discussed. Implications for the design of next-generation traceability methods and tools are discussed and illustrated.
An Overview of KRL, a Knowledge Representation Language
Elements of style: analyzing a software design feature with a counterexample detector We illustrate the application of Nitpick, a specification checker, to the design of a style mechanism for a word processor. The design is cast, along with some expected properties, in a subset of Z. Nitpick checks a property by enumerating all possible cases within some finite bounds, displaying as a counterexample the first case for which the property fails to hold. Unlike animation or execution tools, Nitpick does not require state transitions to be expressed constructively, and unlike theorem provers, operates completely automatically without user intervention. Using a variety of reduction mechanisms, it can cover an enormous number of cases in a reasonable time, so that subtle flaws can be rapidly detected.
Communication Port: A Language Concept for Concurrent Programming A new language concept, communication port (CP), is introduced for programming on distributed processor networks. Such a network can contain an arbitrary number of processors each with its own private storage but with no memory sharing. The processors must communicate via explicit message passing. Communication port is an encapsulation of two language properties: "communication non-determinism" and "communication disconnect time." It provides a tool for programmers to write well-structured, modular, and efficient concurrent programs. A number of examples are given in the paper to demonstrate the power of the new concepts.
A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Motivation: When running experiments that involve multiple high density oligonucleotide arrays, it is important to remove sources of variation between arrays of non-biological origin. Normalization is a process for reducing this variation. It is common to see non-linear relations between arrays and the standard normalization provided by Affymetrix does not perform well in these situations. Results: We present three methods of performing normalization at the probe intensity level. These methods are called complete data methods because they make use of data from all arrays in an experiment to form the normalizing relation. These algorithms are compared to two methods that make use of a baseline array: a one number scaling based algorithm and a method that uses a non-linear normalizing relation, by comparing the variability and bias of an expression measure. Two publicly available datasets are used to carry out the comparisons. The simplest and quickest complete data method is found to perform favorably. Availability: Software implementing all three of the complete data normalization methods is available as part of the R package Affy, which is a part of the Bioconductor project http://www.bioconductor.org. Contact: [email protected] Supplementary information: Additional figures may be found at http://www.stat.berkeley.edu/~bolstad/normalize/index.html.
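One of the complete data methods compared in this line of work is quantile normalization; a minimal sketch (ignoring tie handling) that forces every array to share the same empirical distribution, with made-up intensities:

    import numpy as np

    def quantile_normalize(X):
        # X: probes x arrays matrix. Replace each column's values by the
        # mean of the sorted values across all arrays, matched by rank,
        # so every array ends up with the same distribution.
        ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank per column
        mean_quantiles = np.sort(X, axis=0).mean(axis=1)    # reference distribution
        return mean_quantiles[ranks]

    X = np.array([[5.0, 4.0, 3.0],
                  [2.0, 1.0, 4.0],
                  [3.0, 4.0, 6.0],
                  [4.0, 2.0, 8.0]])
    print(quantile_normalize(X))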
A Refinement Theory that Supports Reasoning About Knowledge and Time An expressive semantic framework for program refinement that supports both temporal reasoning and reasoning about the knowledge of multiple agents is developed. The refinement calculus owes the cleanliness of its decomposition rules for all programming language constructs and the relative simplicity of its semantic model to a rigid synchrony assumption which requires all agents and the environment to proceed in lockstep. The new features of the calculus are illustrated in a derivation of the two-phase-commit protocol.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems) and of interaction and parallelism (among systems).
1.008259
0.010256
0.008483
0.0064
0.005129
0.003974
0.002133
0.001149
0.00014
0.000034
0.000005
0
0
0
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
Delegation proxies: the power of propagation Scoping behavioral variations to dynamic extents is useful to support non-functional requirements that otherwise result in cross-cutting code. Unfortunately, such variations are difficult to achieve with traditional reflection or aspects. We show that with a modification of dynamic proxies, called delegation proxies, it becomes possible to reflectively implement variations that propagate to all objects accessed in the dynamic extent of a message send. We demonstrate our approach with examples of variations scoped to dynamic extents that help simplify code related to safety, reliability, and monitoring.
The impact of meta-tracing on VM design and implementation. Most modern languages are implemented using Virtual Machines (VMs). While the best VMs use Just-In-Time (JIT) compilers to achieve good performance, JITs are costly to implement, and few VMs therefore come with one. The RPython language allows tracing JIT VMs to be automatically created from an interpreter, changing the economics of VM implementation. In this paper, we explain, through two concrete VMs, how meta-tracing RPython VMs can be designed and optimised, and, experimentally, the performance levels one might reasonably expect from them.
It's alive! continuous feedback in UI programming Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and execution, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible. This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.
Zero-overhead metaprogramming: reflection and metaobject protocols fast and without compromises Runtime metaprogramming enables many useful applications and is often a convenient solution to solve problems in a generic way, which makes it widely used in frameworks, middleware, and domain-specific languages. However, powerful metaobject protocols are rarely supported and even common concepts such as reflective method invocation or dynamic proxies are not optimized. Solutions proposed in literature either restrict the metaprogramming capabilities or require application or library developers to apply performance improving techniques. For overhead-free runtime metaprogramming, we demonstrate that dispatch chains, a generalized form of polymorphic inline caches common to self-optimizing interpreters, are a simple optimization at the language-implementation level. Our evaluation with self-optimizing interpreters shows that unrestricted metaobject protocols can be realized for the first time without runtime overhead, and that this optimization is applicable for just-in-time compilation of interpreters based on meta-tracing as well as partial evaluation. In this context, we also demonstrate that optimizing common reflective operations can lead to significant performance improvements for existing applications.
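A toy model of a dispatch chain, the generalized polymorphic inline cache the abstract describes: a list of (receiver class, cached method) entries scanned in order and extended on a miss, so that later sends with an already-seen receiver class skip the full method lookup. This is an illustration in Python, not the authors' implementation.

    class DispatchChain:
        def __init__(self):
            self.entries = []  # list of (receiver class, cached method)

        def send(self, receiver, selector, *args):
            for klass, method in self.entries:
                if type(receiver) is klass:      # cache hit: no lookup
                    return method(receiver, *args)
            # cache miss: do the full lookup once, then memoize it
            method = getattr(type(receiver), selector)
            self.entries.append((type(receiver), method))
            return method(receiver, *args)

    class Square:
        def area(self): return 4
    class Circle:
        def area(self): return 3.14

    chain = DispatchChain()
    for obj in [Square(), Circle(), Square()]:
        print(chain.send(obj, "area"))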
Fine-grained modularity and reuse of virtual machine components Modularity is a key concept for large and complex applications and an important enabler for collaborative research. In comparison, virtual machines (VMs) are still mostly monolithic pieces of software. Our goal is to significantly reduce the cost of extending VMs to efficiently host and execute multiple, dynamic languages. We are designing and implementing a VM following the "everything is extensible" paradigm. Among the novel use cases that will be enabled by our research are: VM extensions by third parties, support for multiple languages inside one VM, and a universal VM for mobile devices. Our research will be based on the existing state of the art. We will reuse an existing metacircular Java VM and an existing dynamic language VM implemented in Java. We will split the VMs into fine-grained modules, define explicit interfaces and extension points for the modules, and finally re-connect them. Performance is one of the most important concerns for VMs. Modularity improves flexibility but can introduce an unacceptable performance overhead at the module boundaries, e.g., for inter-module method calls. We will identify this overhead and address it with novel feedback-directed compiler optimizations. These optimizations will also improve the performance of modular applications running on top of our VM. The expected results of our research will be not only new insights and a new design approach for VMs, but also a complete reference implementation of a modular VM where everything is extensible by third parties and that supports multiple languages.
Stepwise refinement of parallel algorithms The refinement calculus and the action system formalism are combined to provide a uniform method for constructing parallel and distributed algorithms by stepwise refinement. It is shown that the sequential refinement calculus can be used as such for most of the derivation steps. Parallelism is introduced during the derivation by refinement of atomicity. The approach is applied to the derivation of a parallel version of the Gaussian elimination method for solving simultaneous linear equation systems.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Compact chart: a program logic notation with high describability and understandability This paper describes an improved flow chart notation, Compact Chart, developed because the flow chart conception is effective in constructing program logics, but the conventional notation for it is ineffective. By introducing the idea of separation of control transfer and process description, Compact Charting gives an improved method of representing and understanding program logics.
Powerful Techniques for the Automatic Generation of Invariants. When proving invariance properties of programs one is faced with two problems. The first problem is related to the necessity of proving tautologies of the considered assertion language, whereas the second manifests in the need of finding sufficiently strong invariants. This paper focuses on the second problem and describes techniques for the automatic generation of invariants. The first set of these techniques is applicable on sequential transition systems and allows one to derive so-called local ...
Software development: two approaches to animation of Z specifications using Prolog Formal methods rely on the correctness of the formal requirements specification, but this correctness cannot be proved. This paper discusses the use of software tools to assist in the validation of formal specifications and advocates a system by which Z specifications may be animated as Prolog programs. Two Z/Prolog translation strategies are explored: formal program synthesis and structure simulation. The paper explains why the former proved to be unsuccessful and describes the techniques developed for implementing the latter approach, with the aid of case studies.
Navigating hierarchically clustered networks through fisheye and full-zoom methods Many information structures are represented as two-dimensional networks (connected graphs) of links and nodes. Because these networks tend to be large and quite complex, people often prefer to view part or all of the network at varying levels of detail. Hierarchical clustering provides a framework for viewing the network at different levels of detail by superimposing a hierarchy on it. Nodes are grouped into clusters, and clusters are themselves placed into other clusters. Users can then navigate these clusters until an appropriate level of detail is reached. This article describes an experiment comparing two methods for viewing hierarchically clustered networks. Traditional full-zoom techniques provide details of only the current level of the hierarchy. In contrast, fisheye views, generated by the "variable-zoom" algorithm described in this article, provide information about higher levels as well. Subjects using both viewing methods were given problem-solving tasks requiring them to navigate a network, in this case, a simulated telephone system, and to reroute links in it. Results suggest that the greater context provided by fisheye views significantly improved user performance. Users were quicker to complete their task and made fewer unnecessary navigational steps through the hierarchy. This validation of fisheye views is important for designers of interfaces to complicated monitoring systems, such as control rooms for supervisory control and data acquisition systems, where efficient human performance is often critical. However, control room operators remained concerned about the size and visibility tradeoffs between the fine detail provided by full-zoom techniques and the global context supplied by fisheye views. Specific interface features are required to reconcile the differences.
Notes on Nonrepetitive Graph Colouring. A vertex colouring of a graph is nonrepetitive on paths if there is no path v_1, v_2, ..., v_{2t} such that v_i and v_{t+i} receive the same colour for all i = 1, 2, ..., t. We determine the maximum density of a graph that admits a k-colouring that is nonrepetitive on paths. We prove that every graph has a subdivision that admits a 4-colouring that is nonrepetitive on paths. The best previous bound was 5. We also study colourings that are nonrepetitive on walks, and provide a conjecture that would imply that every graph with maximum degree Δ has an f(Δ)-colouring that is nonrepetitive on walks. We prove that every graph with treewidth k and maximum degree Δ has an O(kΔ)-colouring that is nonrepetitive on paths, and an O(kΔ^3)-colouring that is nonrepetitive on walks.
Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se...
1.042869
0.04475
0.044595
0.044345
0.043167
0.023405
0.000001
0
0
0
0
0
0
0
Knowledge Management: Insights from the Trenches In 1999, the Software Productivity Consortium, a not-for-profit alliance of industry, government, and academia, asked our members to indicate which technological advances they need most urgently. Most respondents stressed the need to better leverage an increasingly vast and complex array of intellectual assets. Such assets represent today's new capital, marking a profound shift from more traditional types of capital. To address this urgent need, the Consortium launched a knowledge management (KM) program to develop our competency in knowledge management so that we can better serve our members and to provide products and services that will help members develop their own KM competencies. We focused first on making access to Consortium assets easier through an enterprise portal. Then, to address the larger KM issues, we also partnered with George Washington University and its new Institute for Knowledge Management, which seeks to establish a sound theoretical foundation for KM. Here, we recap the lessons we have learned in pursuing our KM mandate and set forth what we believe are the keys to KM's future success.
Verification and validation of knowledge-based systems Knowledge-based systems (KBSs) are being used in many applications areas where their failures can be costly because of losses in services, property or even life. To ensure their reliability and dependability, it is therefore important that these systems are verified and validated before they are deployed. This paper provides perspectives on issues and problems that impact the verification and validation (V&V) of KBSs. Some of the reasons why V&V of KBSs is difficult are presented. The paper also provides an overview of different techniques and tools that have been developed for performing V&V activities. Finally, some of the research issues that are relevant for future work in this field are discussed
Knowledge management and the dynamic nature of knowledge Knowledge management (KM) or knowledge sharing in organizations is based on an understanding of knowledge creation and knowledge transfer. In implementation, KM is an effort to benefit from the knowledge that resides in an organization by using it to achieve the organization's mission. The transfer of tacit or implicit knowledge to explicit and accessible formats, the goal of many KM projects, is challenging, controversial, and endowed with ongoing management issues. This article argues that effective knowledge management in many disciplinary contexts must be based on understanding the dynamic nature of knowledge itself. The article critiques some current thinking in the KM literature and concludes with a view towards knowledge management programs built around knowledge as a dynamic process.
The Semantics of Semantic Annotation Semantic metadata will play a significant role in the provision of the Semantic Web. Agents will need metadata that describes the content of resources in order to perform operations, such as retrieval, over those resources. In addition, if rich semantic metadata is supplied, those agents can then employ reasoning over the metadata, enhancing their processing power. Key to this approach is the provision of annotations, both through automatic and human means. The semantics of these annotations, however, in terms of the mechanisms through which they are interpreted and presented to the user, are sometimes unclear. In this paper, we identify a number of candidate interpretations of annotation, and discuss the impact these interpretations may have on Semantic Web applications.
Generating domain-specific methodical knowledge for requirement analysis based on methodology ontology A methodology ontology is proposed to help requirement analysis. It helps the software engineer to generate methodical knowledge for requirement analysis (RA). It includes a method library and a set of modeling support entities. The method library contains method templates and components, e.g., primitive methods, inference functions, inference structures, domain models, and a domain ontology. The modeling support entities use the library to construct the desired methodical knowledge. An example is given showing how this approach constructs an object-oriented RA method. Thus generated methodical knowledge can be further coupled with domain knowledge to form a domain-specific RA tool, which relieves the software engineer of selecting and applying a distinct RA tool for a domain. This approach allows ready enhancement, since it is designed as an open architecture, which helps us to evolve it to assimilate new software methodology.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
OIL: An Ontology Infrastructure for the Semantic Web Currently, computers are changing from single isolated devices to entry points into a worldwide network of information exchange and business transactions. Support in the exchange of data, information, and knowledge is becoming the key issue in computer technology today. Ontologies provide a shared and common understanding of a domain that can be communicated between people and across application systems. Ontologies will play a major role in supporting information exchange processes in various areas. A prerequisite for such a role is the development of a joint standard for specifying and exchanging ontologies well integrated with existing Web standards. This article deals with precisely this necessity. The authors present OIL, a proposal for such a standard enabling the semantic Web. It is based on existing proposals such as OKBC, XOL, and RDFS and enriches them with necessary features for expressing rich ontologies. The article presents the motivation, underlying rationale, modeling primitives, syntax, semantics, tool environment, and applications of OIL.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Principles of good software specification and their implications for specification languages Careful consideration of the primary uses of software specifications leads directly to three criteria for judging specifications, which can then be used to develop eight design principles for "good" specifications. These principles, in turn, result in eighteen implications for specification languages that strongly constrain the set of adequate specification languages and identify the need for several novel capabilities such as historical and future references, elimination of variables, and result specification.
Knowledge Visualization from Conceptual Structures This paper addresses the problem of automatically generating displays from conceptual graphs for visualization of the knowledge contained in them. Automatic display generation is important in validating the graphs and for communicating the knowledge they contain. Displays may be classified as literal, schematic, or pictorial, and also as static versus dynamic. At this time prototype software has been developed to generate static schematic displays of graphs representing knowledge of digital systems. The prototype software generates displays in two steps, by first joining basis displays associated with basis graphs from which the graph to be displayed is synthesized, and then assigning screen coordinates to the display elements. Other strategies for mapping conceptual graphs to schematic displays are also discussed. Keywords: Visualization, Representation Mapping, Conceptual Graphs, Schematic Diagrams, Pictures
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
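For intuition, in the (max, +) dioid the firing epochs of an event graph satisfy a linear recurrence x(k+1) = A ⊗ x(k), where ⊗ replaces addition by max and multiplication by +; a small Python sketch with invented holding times follows.

    import numpy as np

    def maxplus_matmul(A, x):
        # (max, +) matrix-vector product: entry i is max_j (A[i, j] + x[j]);
        # put -np.inf in A where the event graph has no place from j to i
        return np.array([max(A[i, j] + x[j] for j in range(len(x)))
                         for i in range(A.shape[0])])

    # A[i, j]: holding time of the place from transition j to transition i;
    # x: vector of k-th firing epochs (all numbers invented)
    A = np.array([[3.0, 7.0],
                  [2.0, 4.0]])
    x = np.array([0.0, 1.0])
    for _ in range(3):
        x = maxplus_matmul(A, x)   # the linear recurrence in the dioid
        print(x)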
Statechartable Petri nets. Petri nets and statecharts can model concurrent systems in a succinct way. While translations from statecharts to Petri nets exist, a well-defined translation from Petri nets to statecharts is lacking. Such a translation should map an input net to a corresponding statechart, having a structure and behaviour similar to that of the input net. Since statecharts can only model a restricted form of concurrency, not every Petri net has a corresponding statechart. We identify a class of Petri nets, called statechartable nets, that can be translated to corresponding statecharts. Statechartable Petri nets are structurally defined using the novel notion of an area. We also define a structural translation that maps each statechartable Petri net to a corresponding statechart. The translation is proven sound and complete for statechartable Petri nets.
On ternary square-free circular words Circular words are cyclically ordered finite sequences of letters. We give a computer-free proof of the following result by Currie: square-free circular words over the ternary alphabet exist for all lengths l except for 5, 7, 9, 10, 14, and 17. Our proof reveals an interesting connection between ternary square-free circular words and closed walks in the K(3,3) graph. In addition, our proof implies an exponential lower bound on the number of such circular words of length l and allows one to list all lengths l for which such a circular word is unique up to isomorphism.
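A brute-force check that a circular word is square-free, scanning all factors of length at most l inside the doubled word, is enough to reproduce the small cases mentioned above; the example words are ours.

    from itertools import product

    def is_square_free_circular(word):
        # a circular word of length l is square-free if no factor of
        # length <= l (read cyclically) is a square xx; every such
        # factor occurs as a substring of the doubled word
        l = len(word)
        doubled = word + word
        for start in range(l):
            for half in range(1, l // 2 + 1):
                if doubled[start:start + half] == doubled[start + half:start + 2 * half]:
                    return False
        return True

    # no ternary square-free circular word of length 5 exists
    print(any(is_square_free_circular("".join(w))
              for w in product("012", repeat=5)))   # False
    print(is_square_free_circular("012021"))        # True: length 6 works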
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.100286
0.100571
0.100571
0.100571
0.100571
0.050286
0.016801
0.000007
0
0
0
0
0
0
Concurrent Processes and Their Syntax
Communication Port: A Language Concept for Concurrent Programming A new language concept, communication port (CP), is introduced for programming on distributed processor networks. Such a network can contain an arbitrary number of processors each with its own private storage but with no memory sharing. The processors must communicate via explicit message passing. Communication port is an encapsulation of two language properties: "communication non-determinism" and "communication disconnect time." It provides a tool for programmers to write well-structured, modular, and efficient concurrent programs. A number of examples are given in the paper to demonstrate the power of the new concepts.
An indeterminate constructor for applicative programming This paper proposes the encapsulization and control of contending parallel processes within data structures. The advantage of embedding the contention within data is that the contention, itself, thereby becomes an object which can be handled by the program at a level above the actions of the processes themselves. This means that an indeterminate behavior, never precisely specified by the programmer or by the input, may be shared in the same way that an argument to a function is shared by every use of the corresponding parameter, an ability which is of particular importance to applicative-style programming.
Aspects of applicative programming for file systems (Preliminary Version) This paper develops the implications of recent results in semantics for applicative programming. Applying suspended evaluation (call-by-need) to the arguments of file construction functions results in an implicit synchronization of computation and output. The programmer need not participate in the determination of the pace and the extent of the evaluation of his program. Problems concerning multiple input and multiple output files are considered: typical behavior is illustrated with an example of a rudimentary text editor written applicatively. As shown in the trace of this program, the driver of the program is the sequential output device(s). Implications of applicative languages for I/O bound operating systems are briefly considered.
An operational requirement description model for open systems Requirement engineering has been successfully applied to many superficial problems, but there has been little evidence of transfer to complex system construction. In this paper we present a new conceptual model for incomplete requirement descriptions. The model is especially designed to support the requirement specification and analysis of open systems. An analysis of existing models and languages shows the main problem in requirement engineering: the harmony between a well-defined basic model and a convenient language. The new model REMOS combines the complex requirements of open systems with the basic characteristics of transaction-oriented systems. Transaction-oriented systems are fault-tolerant and offer security and privacy mechanisms. Such systems provide excellent properties - why don't we already profit from them during requirement specification? The model REMOS and the applicative language RELOS take advantage of transaction properties. REMOS is based on the definition of scenarios and communicating subsystems. REMOS guides users to make their requirements clearer, and RELOS offers a medium for requirement definition.
Software requirements: Are they really a problem? Do requirements arise naturally from an obvious need, or do they come about only through diligent effort—and even then contain problems? Data on two very different types of software requirements were analyzed to determine what kinds of problems occur and whether these problems are important. The results are dramatic: software requirements are important, and their problems are surprisingly similar across projects. New software engineering techniques are clearly needed to improve both the development and statement of requirements.
Specification Diagrams for Actor Systems Traditional approaches to specifying distributed systems include temporal logic specification (e.g. TLA), and process algebra specification (e.g. LOTOS). We propose here a new form of graphical notation for specifying open distributed object systems. The primary design goal is to make a form of notation for defining message-passing behavior that is expressive, intuitively understandable, and that has a formal underlying semantics. We describe the language and its use through presentation of a series of example specifications. We also give an operationally-based interaction path semantics for specification diagrams.
A compositional approach to superimposition A general definition of the notion of superimposition is presented. We show that previous constructions under the same name can be seen as special cases of our definition. We consider several properties of superimposition definable in our terms, notably the nonfreezing property. We also consider a syntactic representation of our construct in CSP
Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables.The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
A simple approach to specifying concurrent systems Over the past few years, I have developed an approach to the formal specification of concurrent systems that I now call the transition axiom method. The basic formalism has already been described in [12] and [1], but the formal details tend to obscure the important concepts. Here, I attempt to explain these concepts without discussing the details of the underlying formalism. Concurrent systems are not easy to specify. Even a simple system can be subtle, and it is often hard to find the appropriate abstractions that make it understandable. Specifying a complex system is a formidable engineering task. We can understand complex structures only if they are composed of simple parts, so a method for specifying complex systems must have a simple conceptual basis. I will try to demonstrate that the transition axiom method provides such a basis. However, I will not address the engineering problems associated with specifying real systems. Instead, the concepts will be illustrated with a series of toy examples that are not meant to be taken seriously as real specifications. Are you proposing a specification language? No. The transition axiom method provides a conceptual and logical foundation for writing formal specifications; it is not a specification language. The method determines what a specification must say; a language determines in detail how it is said. What do you mean by a formal specification? I find it helpful to view a specification as a contract between the user of a system and its implementer. The contract should tell the user everything he must know to use the system, and it should tell the implementer everything he must know about the system to implement it. In principle, once this contract has been agreed upon, the user and the implementer have no need for further communication. (This view describes the function of the specification; it is not meant as a paradigm for how systems should be built.) For a specification to be formal, the question of whether an implementation satisfies the specification must be reducible to the question of whether an assertion is provable in some mathematical system. To demonstrate that he has met the terms of the contract, the implementer should resort to logic rather than contract law. This does not mean that an implementation must be accompanied by a mathematical proof. It does mean that it should be possible, in principle though not necessarily in practice, to provide such a proof for a correct implementation. The existence of a formal basis for the specification method is the only way I know to guarantee that specifications are unambiguous. Ultimately, the systems we specify are physical objects, and mathematics cannot prove physical properties. We can prove properties only of a mathematical model of the system; whether or not the system correctly implements the model must remain a question of law and not of mathematics. Just what is a system? By "system," I mean anything that interacts with its environment in a discrete (digital) fashion across a well-defined boundary. An airline reservation system is such a system, where the boundary might be drawn between the agents using the system, who are part of the environment, and the terminals, which are part of the system. A Pascal procedure is a system whose environment is the rest of the program, with which it interacts by responding to procedure calls and accessing global variables.
Thus, the system being specified may be just one component of a larger system.A real system has many properties, ranging from its response time to the color of the cabinet. No formal method can specify all of these properties. Which ones can be specified with the transition axiom method? The transition axiom method specifies the behavior of a system—that is, the sequence of observable actions it performs when interacting with the environment. More precisely, it specifies two classes of behavioral properties: safety and liveness properties. Safety properties assert what the system is allowed to do, or equivalently, what it may not do. Partial correctness is an example of a safety property, asserting that a program may not generate an incorrect answer. Liveness properties assert what the system must do. Termination is an example of a liveness property, asserting that a program must eventually generate an answer. (Alpern and Schneider [2] have formally defined these two classes of properties.) In the transition axiom method, safety and liveness properties are specified separately.There are important behavioral properties that cannot be specified by the transition axiom method; these include average response time and probability of failure. A transition axiom specification can provide a formal model with which to analyze such properties, but it cannot formally specify them.There are also important nonbehavioral properties of systems that one might want to specify, such as storage requirements and the color of the cabinet. These lie completely outside the realm of the method.Why specify safety and liveness properties separately? There is a single formalism that underlies a transition axiom specification, so there is no formal separation between the specification of safety and liveness properties. However, experience indicates that different methods are used to reason about the two kinds of properties and it is convenient in practice to separate them. I consider the ability to decompose a specification into liveness and safety properties to be one of the advantages of the method. (One must prove safety properties in order to verify liveness properties, but this is a process of decomposing the proof into smaller lemmas.)Can the method specify real-time behavior? Worst-case behavior can be specified, since the requirement that the system must respond within a certain length of time can be expressed as a safety property—namely, that the clock is not allowed to reach a certain value without the system having responded. Average response time cannot be expressed as a safety or liveness property.The transition axiom method can assert that some action either must occur (liveness) or must not occur (safety). Can it also assert that it is possible for the action to occur? No. A specification serves as a contractual constraint on the behavior of the system. An assertion that the system may or may not do something provides no constraint and therefore serves no function as part of the formal specification. Specification methods that include such assertions generally use them as poor substitutes for liveness properties. Some methods cannot specify that a certain input must result in a certain response, specifying instead that it is possible for the input to be followed by the response.
Every specification I have encountered that used such assertions was improved by replacing the possibility assertions with liveness properties that more accurately expressed the system's informal requirements.Imprecise wording can make it appear that a specification contains a possibility assertion when it really doesn't. For example, one sometimes states that it must be possible for a transmission line to lose messages. However, the specification does not require that the loss of messages be possible, since this would prohibit an implementation that guaranteed no messages were lost. The specification might require that something happens (a liveness property) or doesn't happen (a safety property) despite the loss of messages. Or, the statement that messages may be lost might simply be a comment about the specification, observing that it does not require that all messages be delivered, and not part of the actual specification.If a safety property asserts that some action cannot happen, doesn't its negation assert that the action is possible? In a formal system, one must distinguish the logical formula A from the assertion ⊢ A, which means that A is provable in the logic; ⊢ A is not a formula of the logic itself. In the logic underlying the transition axiom method, if A represents a safety property asserting that some action is impossible, then the negation of A, which is the formula ¬A, asserts that the action must occur. The action's possibility is expressed by the negation of ⊢ A, which is a metaformula and not a formula within the logic. See [10] for more details.
A rule language to capture and model business policy specifications The TEMPORA paradigm for the development of large data-intensive, transaction-oriented information systems explicitly recognises the role of organisational policy within an information system, and visibly maintains this policy throughout the software development process, from requirements specifications through to an executable implementation.
Multilevel Visualization of Clustered Graphs Clustered graphs are graphs with recursive clustering structures over the vertices. This type of structure appears in many systems. Examples include CASE tools, management information systems, VLSI design tools, and reverse engineering systems. Existing layout algorithms represent the clustering structure as recursively nested regions in the plane. However, as the structure becomes more and more complex, two dimensional plane representations tend to be insufficient. In this paper, firstly, we describe some two dimensional plane drawing algorithms for clustered graphs; then we show how to extend two dimensional plane drawings to three dimensional multilevel drawings. We consider two conventions: straight-line convex drawings and orthogonal rectangular drawings; and we show some examples.
Linda-based applicative and imperative process algebras The classical algebraic approach to the specification and verification of concurrent systems is tuned to distributed programs that rely on asynchronous communications and permit explicit data exchange. An applicative process algebra, obtained by embedding the Linda primitives for interprocess communication in a CCS/CSP-like language, and an imperative one, obtained from the applicative variant by adding a construct for explicit assignment of values to variables, are introduced. The testing framework is used to define behavioural equivalences for both languages and sound and complete proof systems for them are described together with a fully abstract denotational model (namely, a variant of Strong Acceptance Trees).
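For readers unfamiliar with Linda, a toy sketch of the primitives being embedded (out, rd, in) may help; the TupleSpace class and wildcard matching below are illustrative simplifications of our own, and a real implementation would block on in/rd until a matching tuple appears rather than return None.

```python
# Toy tuple space illustrating the Linda primitives out/in/rd.
ANY = object()  # wildcard field used in templates

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, t):                      # deposit a tuple
        self.tuples.append(t)

    def _match(self, template, t):
        return len(template) == len(t) and all(
            f is ANY or f == v for f, v in zip(template, t))

    def rd(self, template):                # read without removing
        for t in self.tuples:
            if self._match(template, t):
                return t
        return None                        # a real rd would block here

    def in_(self, template):               # read and remove
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("job", 1, "payload"))
print(ts.in_(("job", ANY, ANY)))           # ('job', 1, 'payload')
```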
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0–score_13: 1.044334, 0.020056, 0.020056, 0.010028, 0.006667, 0.001005, 0.000108, 0.000034, 0.000016, 0.000005, 0.000001, 0, 0, 0
Introductory paper: Reflections on conceptual modelling The objective of this introductory paper is twofold. On the one hand, it presents the guest editors' view of the complex field of Conceptual Modelling. To do that, we discuss some concepts related to this topic, as well as the relation that exists today between this process and the other parts of software development. On the other hand, this introductory paper describes the papers that are included in this special issue, connecting each of these papers with the view that the guest editors have of the field.
Representing Software Engineering Knowledge We argue that one important role that Artificial Intelligence can play in Software Engineering is to act as a source of ideas about representing knowledge that can improve the state-of-the-art in software information management, rather than just building intelligent computer assistants. Among others, such techniques can lead to new approaches for capturing, recording, organizing, and retrieving knowledge about a software system. Moreover, this knowledge can be stored in a software knowledge base, which serves as "corporate memory", facilitating the work of developers, maintainers and users alike. We pursue this central theme by focusing on requirements engineering knowledge, illustrating it with ideas originally reported in (Greenspan et al., 1982; Borgida et al., 1993; Yu, 1993) and (Chung, 1993b). The first example concerns the language RML, designed on a foundation of ideas from frame- and logic-based knowledge representation schemes, to offer a novel (at least for its time) formal requirements modeling language. The second contribution adapts solutions of the frame problem originally proposed in the context of AI planning in order to offer a better formulation of the notion of state change caused by an activity, which appears in most formal requirements modeling languages. The final contribution imports ideas from multi-agent planning systems to propose a novel ontology for capturing organizational intentions in requirements modeling. In each case we examine alterations that have been made to knowledge representation ideas in order to adapt them for Software Engineering use.
A conceptual model completely independent of the implementation paradigm Several authors have pointed out that current conceptual models have two main shortcomings. First, they are clearly oriented to a specific development paradigm (structured, objects, etc.). Second, once the conceptual models have been obtained, it is really difficult to switch to another development paradigm because of the model's orientation to a specific development approach. This fact induces problems during development, since practitioners are encouraged to think in terms of a solution before the problem at hand is well understood, thus perhaps anticipating bad design decisions. An appropriate analysis task requires models that are independent of any implementation issues. Concretely, models should support developers in understanding the problem and its constraints before any solution is identified. This paper proposes such an alternative approach to conceptual modelling, called the "problem-oriented analysis method".
Mapping a functional specification to an object-oriented specification in software re-engineering Re-engineering of software consists of three main phases - reverse engineering the code into an abstraction, modifying the abstraction for better maintenance and future enhancements, and re-implementing the modified abstraction. This paper is a contribution to the second phase of a re-engineering process. It is assumed that the first phase reverses the code and develops a functional abstraction in the Z specification language. This paper describes a methodology to transform the Z specification into an Object-Z specification. An implementation derived from the latter is easier to maintain and to enhance because of the advantages of the object-oriented approach. A second phase of the transformation process describes optimization techniques to improve the Object-Z specification, taking into account the additional features in Object-Z. The transformation has been successfully applied to a large case study comparable to industrial-size problems.
From object-oriented to goal-oriented requirements analysis
The Object Flow Model: A Formal Framework for Describing the Dynamic Construction, Destruction and Interaction of Complex Objects This research complements active object-oriented database management systems by providing a formal, yet conceptually-natural model for complex object construction and destruction. The Object Flow Model (OFM), introduced in this paper, assumes an object-oriented database for the rich structural description of objects and for the specification of methods to manipulate objects. The OFM contributes a third component, the Object Flow Diagram (OFD), which provides a visual formalism to describe how multiple objects and events can actively invoke processing steps, how objects can become part of progressively more complex objects, and how complex objects can be picked apart. The OFD thus provides an invocation mechanism that is more general than a single message and a processing mechanism that may invoke multiple methods (so long as they apply to either the input or output objects). The development of the OFD was influenced by conceptual modeling languages and discrete event simulation languages and the formal semantics of the OFD is based on work in deductive databases.
Goal-directed requirements acquisition Requirements analysis includes a preliminary acquisition step where a global model for the specification of the system and its environment is elaborated. This model, called requirements model, involves concepts that are currently not supported by existing formal specification languages, such as goals to be achieved, agents to be assigned, alternatives to be negotiated, etc. The paper presents an approach to requirements acquisition which is driven by such higher-level concepts. Requirements models are acquired as instances of a conceptual meta-model. The latter can be represented as a graph where each node captures an abstraction such as, e.g., goal, action, agent, entity, or event, and where the edges capture semantic links between such abstractions. Well-formedness properties on nodes and links constrain their instances—that is, elements of requirements models. Requirements acquisition processes then correspond to particular ways of traversing the meta-model graph to acquire appropriate instances of the various nodes and links according to such constraints. Acquisition processes are governed by strategies telling which way to follow systematically in that graph; at each node specific tactics can be used to acquire the corresponding instances. The paper describes a significant portion of the meta-model related to system goals, and one particular acquisition strategy where the meta-model is traversed backwards from such goals. The meta-model and the strategy are illustrated by excerpts of a university library system.
A Conceptual Framework for Requirements Engineering. A framework for assessing research and practice in requirements engineering is proposed. The framework is used to survey state of the art research contributions and practice. The framework considers a task activity view of requirements, and elaborates different views of requirements engineering (RE) depending on the starting point of a system development. Another perspective is to analyse RE from different conceptions of products and their properties. RE research is examined within this framework and then placed in the context of how it extends current system development methods and systems analysis techniques.
Requirements-Based Testing of Real-Time Systems: Modeling for Testability
A new, fast, and efficient image codec based on set partitioning in hierarchical trees Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code
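The ordered bit-plane transmission principle mentioned above can be sketched in a few lines; note this is not SPIHT itself, whose set-partitioning trees are considerably more involved, and the function below is only an illustration of significance thresholds halving from plane to plane.

```python
# Illustration of ordered bit-plane transmission, one principle SPIHT
# refines with set-partitioning trees; this is NOT the SPIHT algorithm.
def bit_planes(coeffs):
    """Yield (threshold, indices significant at that threshold) from
    the most to the least significant bit plane of the magnitudes."""
    n = max(abs(c) for c in coeffs)
    t = 1
    while t * 2 <= n:          # largest power of two <= max magnitude
        t *= 2
    while t >= 1:
        yield t, [i for i, c in enumerate(coeffs) if abs(c) >= t]
        t //= 2

# Toy "wavelet coefficients"; a real coder would send refinement bits.
for threshold, significant in bit_planes([34, -5, 12, 0, 3]):
    print(threshold, significant)
```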
Finding Response-Times In A Real-Time System There are two major performance issues in a real-time system where a processor has a set of devices connected to it at different priority levels. The first is to prove whether, for a given assignment of devices to priority levels, the system can handle its peak processing load without losing any inputs from the devices. The second is to determine the response time for each device. There may be several ways of assigning the devices to priority levels so that the peak processing load is met, but only some (or perhaps none) of these ways will also meet the response-time requirements for the devices. In this paper, we define a condition that must be met to handle the peak processing load and describe how exact worst-case response times can then be found. When the condition cannot be met, we show how the addition of buffers for inputs can be useful. Finally, we discuss the use of multiple processors in systems for real-time applications.
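The paper's exact worst-case analysis is close in spirit to the now-standard fixed-point response-time recurrence for fixed priorities; a sketch under that formulation, with illustrative device parameters that are not taken from the paper, is shown below.

```python
import math

def response_time(C, T, i, limit=10_000):
    """Worst-case response time of device i under fixed priorities.

    C[j], T[j]: service time and minimum inter-arrival period of the
    device at priority j (lower index = higher priority). Returns None
    if the iteration exceeds `limit` (device i cannot meet its load)."""
    R = C[i]
    while R <= limit:
        # Interference from all higher-priority devices during R.
        nxt = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if nxt == R:           # fixed point reached: R is exact
            return R
        R = nxt
    return None

# Illustrative parameters: three devices, highest priority first.
C = [1, 2, 3]
T = [5, 10, 20]
print([response_time(C, T, i) for i in range(3)])  # [1, 3, 7]
```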
Tools for specifying real-time systems Tools for formally specifying software for real-time systems have strongly improved their capabilities in recent years. At present, tools have the potential for improving software quality as well as engineers' productivity. Many tools have grown out of languages and methodologies proposed in the early 1970s. In this paper, the evolution and the state of the art of tools for real-time software specification is reported, by analyzing their development over the last 20 years. Specification techniques are classified as operational, descriptive or dual if they have both operational and descriptive capabilities. For each technique reviewed three different aspects are analyzed, that is, power of formalism, tool completeness, and low-level characteristics. The analysis is carried out in a comparative manner; a synthetic comparison is presented in the final discussion where the trend of technology improvement is also analyzed.
Cooperating proofs for distributed programs with multiparty interactions The paper presents a proof system for partial-correctness assertions for a language for distributed programs based on multiparty interactions as its interprocess communication and synchronization primitive. The system is a natural generalization of the cooperating proofs introduced for partial-correctness proofs of CSP programs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0–score_13: 1.1124, 0.037467, 0.0248, 0.0248, 0.008504, 0.000181, 0.000053, 0.000005, 0, 0, 0, 0, 0, 0
Observer-Based Composite Adaptive Type-2 Fuzzy Control for PEMFC Air Supply Systems Polymer electrolyte membrane fuel cell (PEMFC) air supply systems are usually affected negatively by model uncertainties, external disturbance, and unmeasured variables. In this article, we propose a composite adaptive type-2 fuzzy controller based on a high-gain observer and a disturbance observer for oxygen excess ratio (OER) of PEMFC air supply systems. First, the derivatives of system output, which are unavailable due to limited sensors, are estimated via the high-gain observer. Then, interval type-2 fuzzy logic systems (IT2 FLSs) are adopted to approximate the unknown system dynamics and the disturbance observer is designed to estimate compound disturbance including unknown external disturbance and fuzzy approximation error. Finally, in order to improve the tracking performance, two composite adaptive updating laws are constructed by utilizing the estimated tracking error and the modeling error. Theoretical analysis shows that the system tracking error is uniformly ultimately bounded by Lyapunov stability theory. Numerical simulations and hardware-in-loop experiments are presented to demonstrate the effectiveness and superiority of the proposed controller.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
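As a rough illustration of the central idea, in notation that is ours and not necessarily the paper's, data refinement through an abstraction predicate transformer is often stated as a simulation condition between weakest-precondition transformers:

```latex
% One common formulation of data refinement between predicate
% transformers; notation is illustrative, not taken from the paper.
% \alpha maps abstract predicates to concrete predicates.
A \sqsubseteq_{\alpha} C
  \quad\iff\quad
  \forall \psi .\; \alpha\big(\mathit{wp}(A,\psi)\big)
  \;\Rightarrow\; \mathit{wp}\big(C,\alpha(\psi)\big)
```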
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
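A minimal sketch of the tabu search skeleton described here, specialized to a multiconstraint knapsack with single-flip moves; the move rule, tenure, and aspiration test below are deliberately far simpler than the paper's choice rules, advanced-level strategies, and learning.

```python
# Minimal tabu search sketch for 0/1 multiconstraint knapsack.
def tabu_knapsack(values, weights, capacities, iters=200, tenure=5):
    n = len(values)
    x = [0] * n                      # current solution (all items out)
    best, best_val = x[:], 0
    tabu = {}                        # item -> iteration its tabu expires

    def feasible(sol):
        return all(sum(w[j] * sol[j] for j in range(n)) <= c
                   for w, c in zip(weights, capacities))

    for it in range(iters):
        candidates = []
        for i in range(n):           # flip one variable per move
            y = x[:]
            y[i] = 1 - y[i]
            if not feasible(y):
                continue
            val = sum(v * yj for v, yj in zip(values, y))
            # aspiration: accept a tabu move only if it beats the best
            if tabu.get(i, 0) > it and val <= best_val:
                continue
            candidates.append((val, i, y))
        if not candidates:
            break
        val, i, x = max(candidates)  # best admissible move
        tabu[i] = it + tenure
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

# Two knapsack constraints over four items (illustrative data).
print(tabu_knapsack([10, 7, 5, 3],
                    [[4, 3, 2, 1], [2, 3, 4, 1]],
                    [6, 6]))        # ([1, 0, 1, 0], 15)
```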
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Lossless image compression based on Kernel Least Mean Squares This paper introduces a novel approach for coding luminance images using kernel-based adaptive filtering and context-adaptive arithmetic coding. This approach tackles the problem that is present in current image and video coders; these coders depend on assumptions of the image and are constrained by the linearity of their predictors. The efficacy of the predictors determines the compression gain. The goal is to create a generic image coder that learns and adapts to the characteristics of the signals and handles nonlinearity in the prediction. Results show that pixel luminance prediction using the Kernel Least Mean Squares (KLMS) yields a significant gain compared to the standard Least Mean Squares algorithm. By coding the residual using a Context-Adaptive Arithmetic Coder (CAAC), the codec is able to outperform the current industry standards of lossless image coding. An average bitrate reduction of more than 2.5% is found for the used test set.
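The plain LMS baseline that the kernelized (KLMS) predictor is compared against is simple to sketch; the raster-scan context, step size, and function name below are illustrative assumptions of ours, and the residuals would then feed the context-adaptive arithmetic coder rather than be stored directly.

```python
# Sketch of LMS-based pixel prediction on a 1-D raster scan: the
# linear baseline the paper's kernel (KLMS) predictor improves on.
def lms_residuals(pixels, order=3, mu=1e-6):
    w = [0.0] * order                  # adaptive filter weights
    residuals = []
    for t in range(order, len(pixels)):
        context = pixels[t - order:t]  # previous `order` pixels
        pred = sum(wi * xi for wi, xi in zip(w, context))
        err = pixels[t] - pred
        residuals.append(err)          # this is what gets entropy coded
        # LMS update: nudge weights along the instantaneous gradient
        w = [wi + mu * err * xi for wi, xi in zip(w, context)]
    return residuals

pixels = [100, 102, 104, 106, 108, 110, 112]
print([round(r, 2) for r in lms_residuals(pixels)])
```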
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Statistical Lossless Compression of Space Imagery and General Data in a Reconfigurable Architecture This paper investigates a universal algorithm and hardware architecture for context-based statistical lossless compression of multiple types of data using FPGA (Field Programmable Gate Array) devices which support partial and dynamic reconfiguration. The proposed system enables optimal modeling strategies for each source type, whilst entropy coding of the modeling output is performed using a statically configured arithmetic coding engine. Spacecraft communications typically involve large amounts of information captured from different sensors that must be transmitted without any loss. The statistical redundancies present in this data can be removed efficiently using the proposed reconfigurable compression technology.
Lossless compression of multispectral image data While spatial correlations are adequately exploited by standard lossless image compression techniques, little success has been attained in exploiting spectral correlations when dealing with multispectral image data. The authors present some new lossless image compression techniques that capture spectral correlations as well as spatial correlation in a simple and elegant manner. The schemes are based on the notion of a prediction tree, which defines a noncausal prediction model for an image. The authors present a backward adaptive technique and a forward adaptive technique. They then give a computationally efficient way of approximating the backward adaptive technique. The approximation gives good results and is extremely easy to compute. Simulation results show that for high spectral resolution images, significant savings can be made by using spectral correlations in addition to spatial correlations. Furthermore, the increase in complexity incurred in order to make these gains is minimal
Near-lossless compression of 3-D optical data Near-lossless compression yielding strictly bounded reconstruction error is proposed for high-quality compression of remote sensing images. A classified causal differential pulse code modulation scheme is presented for optical data, either multi/hyperspectral three-dimensional (3-D) or panchromatic two-dimensional (2-D) observations. It is based on a classified linear-regression prediction, follow...
Low-complexity lossless compression of hyperspectral imagery via linear prediction We present a new low-complexity algorithm for hyperspectral image compression that uses linear prediction in the spectral domain. We introduce a simple heuristic to estimate the performance of the linear predictor from a pixel spatial context and a context modeling mechanism with one-band look-ahead capability, which improves the overall compression with marginal usage of additional memory. The pr...
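Spectral-domain linear prediction of this kind reduces to predicting a pixel in band b from the same pixel in earlier bands; the fixed order-2 coefficients below are an illustrative stand-in for the coefficients a real coder would estimate from the pixel's spatial context.

```python
# Sketch of spectral-domain linear prediction for a hyperspectral
# pixel: predict band b from the same pixel in previous bands. The
# fixed coefficients are a simplification of the paper's heuristic
# and context modeling.
def predict_band(prev_bands, coeffs):
    """Predict one pixel in band b from its values in earlier bands."""
    return sum(c * v for c, v in zip(coeffs, prev_bands))

# Illustrative: one pixel observed across 5 bands, order-2 predictor.
bands = [120.0, 125.0, 131.0, 138.0, 146.0]
coeffs = [-1.0, 2.0]            # x[b] ~ 2*x[b-1] - x[b-2] (linear trend)
for b in range(2, len(bands)):
    pred = predict_band(bands[b - 2:b], coeffs)
    print(b, bands[b] - pred)   # residual to be entropy coded
```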
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
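The delayed-evaluation idea translates directly into thunks; here is a minimal sketch in Python (our own illustration, not the paper's algorithm), memoized so each expression is evaluated at most once, echoing the guarantee of never performing more evaluation steps than the usual method.

```python
# Sketch of delayed evaluation with memoizing thunks.
def delay(f):
    cell = []
    def force():
        if not cell:
            cell.append(f())       # evaluate on first use only
        return cell[0]
    return force

def cons(head, tail_thunk):        # lazy list cell, LISP-style
    return (head, tail_thunk)

def integers_from(n):              # infinite lazy list of integers
    return cons(n, delay(lambda: integers_from(n + 1)))

xs = integers_from(0)
for _ in range(5):
    head, tail = xs
    print(head, end=" ")           # 0 1 2 3 4
    xs = tail()
print()
```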
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
A Software Development Environment for Improving Productivity
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0–score_13: 1.2, 0.010526, 0.007143, 0.004167, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Exploring Human Factors in Formal Diagram Usage Formal diagrammatic notations have been developed as alternatives to symbolic specification notations. Ostensibly to aid users in performing comprehension and reasoning tasks, restrictions called wellformedness conditions may be imposed. However, imposing too many of these conditions can have adverse effects on the utility of the notation (e.g. reducing the expressiveness). Understanding the human factors involved in the use of a notation, such as how user-preference and comprehension relate to the imposition of wellformedness conditions, will enable the notation designers to make more informed design decisions. Euler diagrams are a simple visualization of set-theoretic relationships which are the basis of more expressive constraint languages. We have performed exploratory studies with Euler diagrams which indicated that novice user preferences strongly conform to the imposition of all wellformedness conditions, but that even a limited exposure diminishes this preference.
A Normal Form for Euler Diagrams with Shading In logic, there are various normal forms for formulae; for example, disjunctive and conjunctive normal form for formulae of propositional logic or prenex normal form for formulae of predicate logic. There are algorithms for `reducing' a given formula to a semantically equivalent formula in normal form. Normal forms are used in a variety of contexts including proofs of completeness, automated theorem proving, logic programming etc. In this paper, we develop a normal form for unitary Euler diagrams with shading. We give an algorithm for reducing a given Euler diagram to a semantically equivalent diagram in normal form and hence a decision procedure for determining whether two Euler diagrams are semantically equivalent. Potential applications of the normal form include clutter reduction and automated theorem proving in systems based on Euler diagrams.
Evaluating the Comprehension of Euler Diagrams We describe an empirical investigation into layout criteria that can help with the comprehension of Euler diagrams. Euler diagrams are used to represent set inclusion in applications such as teaching set theory, database querying, software engineering, filing system organisation and bio-informatics. Research in automatically laying out Euler diagrams for use with these applications is at an early stage, and our work attempts to aid this research by informing layout designers about the importance of various Euler diagram aesthetic criteria. The three criteria under investigation were: contour jaggedness, zone area inequality and edge closeness. Subjects were asked to interpret diagrams with different combinations of levels for each of the criteria. Results for this investigation indicate that, within the parameters of the study, all three criteria are important for understanding Euler diagrams and we have a preliminary indication of the ordering of their importance.
The semantics of augmented constraint diagrams Constraint diagrams are a diagrammatic notation which may be used to express logical constraints. They generalize Venn diagrams and Euler circles, and include syntax for quantification and navigation of relations. The notation was designed to complement the Unified Modelling Language in the development of software systems. Since symbols representing quantification in a diagrammatic language can be naturally ordered in multiple ways, some constraint diagrams have more than one intuitive meaning in first-order predicate logic. Any equally expressive notation which is based on Euler diagrams and conveys logical statements using explicit quantification will have to address this problem. We explicitly augment constraint diagrams with reading trees, which provides a partial ordering for the quantifiers (determining their scope as well as their relative ordering). Alternative approaches using spatial arrangements of components, or alphabetical ordering of symbols, for example, can be seen as implicit representations of a reading tree. Whether the reading tree accompanies the diagram explicitly (optimizing expressiveness) or implicitly (simplifying diagram syntax), we show how to construct unambiguous semantics for the augmented constraint diagram.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Programmers use slices when debugging Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs, programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program.
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
A new, fast, and efficient image codec based on set partitioning in hierarchical trees Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code
Combining angels, demons and miracles in program specifications The complete lattice of monotonic predicate transformers is interpreted as a command language with a weakest precondition semantics. This command lattice contains Dijkstra's guarded commands as well as miracles. It also permits unbounded nondeterminism and angelic nondeterminism. The language is divided into sublanguages using criteria of demonic and angelic nondeterminism, termination and absence of miracles. We investigate dualities between the sublanguages and how they can be generated from simple primitive commands. The notions of total correctness and refinement are generalized to the command lattice.
Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach.
Characterizing plans as a set of constraints—the model—a framework for comparative analysis This paper presents an approach to representing and manipulating plans based on a model of plans as a set of constraints. The <I-N-OVA> model is used to characterise the plan representation used within O-Plan and to relate this work to emerging formal analyses of plans and planning. This synergy of practical and formal approaches can stretch the formal methods to cover realistic plan representations as needed for real problem solving, and can improve the analysis that is possible for production planning systems. <I-N-OVA> is intended to act as a bridge to improve dialogue between a number of communities working on formal planning theories, practical planning systems and systems engineering process management methodologies. It is intended to support new work on automatic manipulation of plans, human communication about plans, principled and reliable acquisition of plan information, and formal reasoning about plans.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
0
0
Role of data dictionaries in information resource management The role of information resource dictionary systems (data dictionary systems) is important in two phases of information resource management: First, information requirements analysis and specification, a complex activity requiring data dictionary support; the end result is the specification of an "Enterprise Model," which embodies the major activities, processes, information flows, organizational constraints, and concepts. This role is examined in detail after analyzing the existing approaches to requirements analysis and specification. Second, information modeling, which uses the information in the Enterprise Model to construct a formal implementation-independent database specification; several information models and support tools that may aid in transforming the initial requirements into the final logical database design are examined. The metadata — knowledge about both data and processes — contained in the data dictionary can be used to provide views of data for the specialized tools that make up the database design workbench. The role of data dictionary systems in the integration of tools is discussed.
SODOS: A software documentation support environment — Its use This paper describes a computerized environment, SODOS (Software Documentation Support), which supports the definition and manipulation of documents used in developing software. An object oriented environment is used as a basis for the SODOS interface. SODOS is built around a Software Life Cycle (SLC) Model that structures all access to the documents stored in the environment. One advantage of this model is that it supports software documentation independent of any fixed methodology that the developers may be using. The main advantage of the system is that it permits traceability through each phase of the Life Cycle, thus facilitating the test and maintenance phases. Finally the effort involved in learning and using SODOS is simplified due to a sophisticated “user-friendly” interface.
A Data Type Approach to the Entity-Relationship Approach
The transformation schema: An extension of the data flow diagram to represent control and timing The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behavior. This paper describes an extension of the data flow diagram called the transformation schema. The transformation schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent "centers of activity" (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values. The transformation schema permits the creation and evaluation of two different types of system models. In the essential (requirements) model, the schema is used to represent a virtual machine with infinite resources. The elements of the schema depict idealized processing and memory components. In the implementation model, the schema is used to represent a real machine with limited resources, and the results of the execution predict the behavior of an implementation of requirements. The transformations of the schema can depict software running on digital processors, hard-wired digital or analog circuits, and so on, and the stores of the schema can depict disk files, tables in memory, and so on.
Deals among rational agents A formal framework is presented that models communication and promises in multi-agent interactions. This framework generalizes previous work on cooperation without communication, and shows the ability of communication to resolve conflicts among agents having disparate goals. Using a deal-making mechanism, agents are able to coordinate and cooperate more easily than in the communication-free model. In addition, there are certain types of interactions where communication makes possible mutually beneficial activity that is otherwise impossible to coordinate.
Negoplan: An Expert System Shell for Negotiation Support The authors address a complex, two-party negotiation problem containing the following elements: (1) many negotiation issues that are elements of a negotiating party's position; (2) negotiation goals that can be reduced to unequivocal statements about the problem domain and that represent negotiation issues; (3) a fluid negotiating environment characterized by changing issues and relations between them; and (4) parties negotiating to achieve goals that may change. They describe in some detail the way they logically specify different aspects of negotiation. An application of Negoplan to a labor contract negotiation between the Canadian Paperworkers Union and CIP, Ltd. of Montreal is described.
Resolving Goal Conflicts via Negotiation In non-cooperative multi-agent planning, resolution of multiple conflicting goals is the result of finding compromise solutions. Previous research has dealt with such multi-agent problems where planning goals are well-specified, subgoals can be enumerated, and the utilities associated with subgoals known. Our research extends the domain of problems to include non-cooperative multi-agent interactions where planning goals are ill-specified, subgoals cannot be enumerated, and the associated utilities are not precisely known. We provide a model of goal conflict resolution through negotiation implemented in the PERSUADER, a program that resolves labor disputes. Negotiation is performed through proposal and modification of goal relaxations. Case-Based Reasoning is integrated with the use of multi-attribute utilities to portray tradeoffs and propose novel goal relaxations and compromises. Persuasive arguments are generated and used as a mechanism to dynamically change the agents' utilities so that convergence to an acceptable compromise can be achieved.
Systematic Incremental Validation of Reactive Systems via Sound Scenario Generalization Validating the specification of a reactive system, such as a telephone switching system, traffic controller, or automated network service, is difficult, primarily because it is extremely hard even to state a set of complete and correct requirements, let alone to prove that a specification satisfies them. In the ISAT project [10], end-user requirements are stated as concrete behavior scenarios, and a multi-functional apprentice system aids the human developer in acquiring and maintaining a specification consistent with the scenarios. ISAT's Validation Assistant (isat-va) embodies a novel, systematic, and incremental approach to validation based on the novel technique of sound scenario generalization, which automatically states and proves validation lemmas. This technique enables isat-va to organize the validity proof around a novel knowledge structure, the library of generalized fragments, and provides automated progress tracking and semi-automated help in increasing proof coverage. The approach combines the advantages of software testing and automated theorem proving of formal requirements, avoiding most of their shortcomings, while providing unique advantages of its own.
Formal methods: state of the art and future directions E.M. Clarke and J.M. Wing. [Abstract not recoverable; only ACM permission boilerplate and the category headers Mechanical verification, Specification techniques, and F.4.1 Mathematical Logic remain.]
Component based design of multitolerant systems The concept of multitolerance abstracts problems in system dependability and provides a basis for improved design of dependable systems. In the abstraction, each source of undependability in the system is represented as a class of faults, and the corresponding ability of the system to deal with that undependability source is represented as a type of tolerance. Multitolerance thus refers to the ability of the system to tolerate multiple fault classes, each in a possibly different way. We present a component based method for designing multitolerance. Two types of components are employed by the method, namely detectors and correctors. A theory of detectors, correctors, and their interference free composition with intolerant programs is developed, which enables stepwise addition of components to provide tolerance to a new fault class while preserving the tolerances to the previously added fault classes. We illustrate the method by designing a fully distributed multitolerant program for a token ring
Visual feedback for validation of informal specifications In automatically synthesizing simulation models from informal specifications, the ambiguity of natural language (English) leads to multiple interpretations. The authors report on a system, called the Model Generator, which provides visual feedback showing the interpretation of specification statements that have been automatically translated to a knowledge representation called conceptual graphs. The visual feedback is based on a combination of block diagrams and Petri net graphs.
Computer-Aided Computing Formal program design methods are most useful when supported with suitable mechanization. This need for mechanization has long been apparent, but there have been doubts whether verification technology could cope with the problems of scale and complexity. Though there is very little compelling evidence either way at this point, several powerful mechanical verification systems are now available for experimentation. Using SRI's PVS as one representative example, we argue that the technology of...
Invariants come from templates We present a template mechanism which allows collective behavior and its invariants to be expressed in an abstract form. The mechanism supplements a view-based decomposition of distributed collaboration. Together templates and composition allow common idioms of distributed behavior to be specified and verified in an abstract form, and to be integrated in specifications. Two templates from a formal specification of Lamport's Paxos algorithm are given as examples.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.100645
0.1
0.050649
0.002647
0.00047
0.000317
0.000251
0.000156
0.000092
0.000027
0
0
0
0
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
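To make the dioid concrete, here is a minimal sketch of the (max,+) matrix product, where max plays the role of addition and + plays the role of multiplication, so a timed event graph evolves linearly as x(k+1) = A ⊗ x(k); the matrix of holding times is invented:

```python
import numpy as np

NEG_INF = -np.inf  # the "zero" element of the (max,+) dioid

def maxplus_matmul(A, B):
    """(max,+) product: (A ⊗ B)[i, j] = max_k (A[i, k] + B[k, j])."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), NEG_INF)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

# x(k+1) = A ⊗ x(k): successive firing times of a two-transition event graph
A = np.array([[3.0, 7.0], [2.0, 4.0]])   # invented holding times
x = np.array([[0.0], [0.0]])
for _ in range(3):
    x = maxplus_matmul(A, x)
print(x.ravel())                          # [16. 13.]
```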
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they wonder if such a controller may prevent the explosion of the number of tokens
Sizing of an industrial plant under tight time constraints using two complementary approaches: (max,+) algebra and computer simulation In this article (max,+) spectral theory results are applied in order to solve the problem of sizing in a real-time constrained plant. The process to control is a discrete event dynamic system without conflict. Therefore, it can be modeled by a timed event graph, a class of Petri net, whose behavior can be described with linear equations in the (max,+) algebra. First the sizing of the process without constraint is solved. Then we propose to design a simulation model of the plant to validate the sizing of the process.
Resource optimization and (min,+) spectral theory We show that certain resource optimization problems relative to Timed Event Graphs reduce to linear programs. The auxiliary variables which allow this reduction can be interpreted in terms of eigenvectors in the (min,+) algebra. Keywords: resource optimization, Timed Event Graphs, (max,+) algebra, spectral theory. Timed Event Graphs (TEGs) are a subclass of timed Petri nets which can be used to model deterministic discrete event dynamic systems subject to saturation and...
The equation A⊗x=B⊗y over (max, +). For the two-sided homogeneous linear equation system A⊗x=B⊗y over (max,+), with no infinite rows or columns in A or B, an algorithm is presented which converges to a finite solution from any finite starting point whenever a finite solution exists. If the finite elements of A, B are all integers, convergence is in a finite number of steps, for which a precise bound can be calculated if moreover one of A, B has only finite elements. The algorithm is thus pseudopolynomial in complexity.
Rapid prototyping of control systems using high level Petri nets This paper presents a rapid prototyping methodology for the implementation of control systems, in which high level Petri nets provide the common framework to integrate the main phases of software development: specification, validation, performance evaluation, implementation. Petri nets are shown to be translatable into Ada program structures concerning processes and their synchronizations.
Visual feedback for validation of informal specifications In automatically synthesizing simulation models from informal specifications, the ambiguity of natural language (English) leads to multiple interpretations. The authors report on a system, called the Model Generator, which provides visual feedback showing the interpretation of specification statements that have been automatically translated to a knowledge representation called conceptual graphs. The visual feedback is based on a combination of block diagrams and Petri net graphs.
An Overview of KRL, a Knowledge Representation Language
Histograms of Oriented Gradients for Human Detection We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
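A rough numpy sketch of one HOG cell (centered [-1, 0, 1] gradients, nine unsigned orientation bins, magnitude-weighted votes); block-level contrast normalization and the full detector pipeline are omitted, and the parameter choices are illustrative rather than the paper's exact ones:

```python
import numpy as np

def hog_cell(patch, nbins=9):
    """Histogram of oriented gradients for one cell (e.g. an 8x8 patch)."""
    gx = np.zeros_like(patch, dtype=float)
    gy = np.zeros_like(patch, dtype=float)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]    # centered [-1, 0, 1] mask
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(nbins)
    bin_idx = (ang / (180.0 / nbins)).astype(int) % nbins
    for b, m in zip(bin_idx.ravel(), mag.ravel()):
        hist[b] += m                              # magnitude-weighted vote
    return hist

cell = np.random.default_rng(0).random((8, 8))
print(hog_cell(cell))
```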
Specification and verification of concurrent systems in CESAR The aim of this paper is to illustrate by an example, the alternating bit protocol, the use of CESAR, an interactive system for aiding the design of distributed applications.
Unintrusive Ways to Integrate Formal Specifications in Practice Formal methods can be neatly woven in with less formal, but more widely-used, industrial-strength methods. We show how to integrate the Larch two-tiered specification method (GHW85a) with two used in the waterfall model of software development: Structured Analysis (Ros77) and Structure Charts (YC79). We use Larch traits to define data elements in a data dictionary and the functionality of basic activities in Structured Analysis data-flow diagrams; Larch interfaces and traits to define the behavior of modules in Structure Charts. We also show how to integrate loosely formal specification in a prototyping model by discussing ways of refining Larch specifications as code evolves. To provide some realism to our ideas, we draw our examples from a non-trivial Larch specification of the graphical editor for the Miro visual languages (HMT+90). The companion technical report, CMU-CS-91-111, contains the entire specification.
Viewpoints: Requirements honesty This article discusses issues related to the inconsistency between requirements principles and the need for faster and faster ways of developing software. Requirements principles are related to the purpose of the system and to the appropriateness of requirements that correctly describe what is necessary for the system to fulfil its objectives. I argue that the quest for speed in software development may have the undesirable effect of weakening these principles. Since the beginnings of software engineering, there has been a search for faster ways to develop software. Many techniques and development models have been proposed that contribute to shortening development time, although the reduction in time comes almost as a side effect, as a result of improving some key aspect of software development. Agile methods are the first to place time-to-market as the prominent feature. The risk is to view other quality features as secondary.
An algorithm for blob hierarchy layout We present an algorithm for the aesthetic drawing of basic hierarchical blob structures, of the kind found in higraphs and statecharts and in other diagrams in which hierarchy is depicted as topological inclusion. Our work could also be useful in window system dynamics, and possibly also in things like newspaper layout, etc. Several criteria for aesthetics are formulated, and we discuss their motivation, our methods of implementation and the algorithm's performance.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.066553
0.085687
0.085687
0.076354
0.047265
0.000003
0
0
0
0
0
0
0
0
Delay-dependent stabilization condition for T-S fuzzy neutral systems In this paper, the stabilization problems for a class of Takagi-Sugeno (T-S) fuzzy neutral systems are explored. Utilizing Pólya's theorem and some homogeneous polynomials techniques, a delay-dependent stabilization condition for T-S fuzzy neutral systems is proposed in terms of a linear matrix inequality (LMI) to guarantee the asymptotic stabilization of T-S fuzzy neutral systems. Lastly, an example is illustrated to demonstrate the effectiveness and applicability of the proposed method.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
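The fetch-and-add primitive mentioned above atomically returns the old value while adding to it, which lets many processors claim distinct queue slots without software serialization. A semantic sketch (Python's lock merely stands in for the hardware combining network, which is the whole point of the Ultracomputer design):

```python
import threading

class FetchAndAdd:
    """Models the semantics of an atomic fetch-and-add cell."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # stands in for hardware atomicity

    def fetch_and_add(self, delta):
        with self._lock:
            old = self._value           # return the value before the add
            self._value += delta
            return old

queue_tail = FetchAndAdd()
# each of four "processors" obtains a unique slot index, contention-free
slots = [queue_tail.fetch_and_add(1) for _ in range(4)]
print(slots)                            # [0, 1, 2, 3]
```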
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
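A bare-bones sketch of the tabu search skeleton on an invented 0/1 multiconstraint knapsack instance (flip moves, a recency-based tabu tenure, aspiration by best-so-far); the paper's specialized choice rules, probabilistic measures, and target analysis are not modeled:

```python
def tabu_knapsack(values, weights, caps, iters=200, tenure=5):
    """Tabu search with single-bit-flip moves for a 0/1 multiconstraint knapsack."""
    n = len(values)

    def feasible(x):
        return all(sum(w[i] * x[i] for i in range(n)) <= c
                   for w, c in zip(weights, caps))

    x, best, best_val = [0] * n, [0] * n, 0
    tabu = {}                                # variable index -> iteration it frees up
    for t in range(iters):
        moves = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1                        # flip one variable
            if not feasible(y):
                continue
            v = sum(values[j] * y[j] for j in range(n))
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get(i, 0) <= t or v > best_val:
                moves.append((v, i, y))
        if not moves:                        # a real implementation would escape here
            break
        v, i, x = max(moves)                 # best admissible (possibly worsening) move
        tabu[i] = t + tenure                 # forbid re-flipping i for a while
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

vals = [10, 7, 4, 9, 6]
ws = [[3, 2, 1, 4, 2], [2, 3, 2, 1, 3]]      # two knapsack constraints
print(tabu_knapsack(vals, ws, caps=[7, 6]))
```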
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Exponential Stability, Passivity, and Dissipativity Analysis of Generalized Neural Networks With Mixed Time-Varying Delays In this paper, we analyze the exponential stability, passivity, and $(\mathfrak{Q},\mathfrak{S},\mathfrak{R})$-$\gamma$-dissipativity of generalized neural networks (GNNs) including mixed time-varying delays in state vectors. Novel exponential stability, passivity, and $(\mathfrak{Q},\mathfrak{S},\mathfrak{R})$-$\gamma$-dissipativity criteria are developed in the form of linear matrix inequalities for continuous-time GNNs by constructing an appropriate Lyapunov-Krasovskii functional (LKF) and applying a new weighted integral inequality for handling integral terms in the time derivative of the established LKF for both single and double integrals. Some special cases are also discussed. The superiority of employing the method presented in this paper over some existing methods is verified by numerical examples.
Adaptive Event-Triggered Synchronization of Reaction–Diffusion Neural Networks This article focuses on the design of an adaptive event-triggered sampled-data control (ETSDC) mechanism for synchronization of reaction-diffusion neural networks (RDNNs) with random time-varying delays. Different from the existing ETSDC schemes with predetermined constant thresholds, an adaptive ETSDC mechanism is proposed for RDNNs. The adaptive ETSDC mechanism can be promptly adaptively adjusted since the threshold function is based on the current sampled and latest transmitted signals. Thus, the adaptive ETSDC mechanism can effectively save communication resources for RDNNs. By taking the influence of uncertain factors, the random time-varying delays are considered, which belongs to two intervals in a probabilistic way. Then, by constructing an appropriate Lyapunov-Krasovskii functional (LKF), new synchronization criteria are derived for RDNNs. By solving a set of linear matrix inequalities (LMIs), the desired adaptive ETSDC gain is obtained. Finally, the merits of the adaptive ETSDC mechanism and the effectiveness of the proposed results are verified by one numerical example.
Memory-Based Continuous Event-Triggered Control for Networked T–S Fuzzy Systems Against Cyberattacks This article investigates the problem of resilient control for the Takagi–Sugeno (T–S) fuzzy systems against bounded cyberattacks. A novel memory-based event triggering mechanism (ETM) is developed, by which the past information of the physical process through the window function is utilized. Using such an ETM can not only lead to a lower data-releasing rate but also reduce the occurrence of wrong t...
New Stability Criteria Of Singular Systems With Time-Varying Delay Via Free-Matrix-Based Integral Inequality This paper concerns the stability problem of singular systems with time-varying delay. First, the singular system with time-varying delay is transformed into the neutral system with time-varying delay. Second, a more proper Lyapunov-Krasovskii functional (LKF) is constructed by adding some integral terms to quadratic forms. Then, to obtain less conservative conditions, the free-matrix-based integral inequality is adopted to estimate the derivative of LKF. As a result, some delay-dependent stability criteria are given in terms of linear matrix inequalities. Finally, two numerical examples are provided to demonstrate the effectiveness and superiority of the proposed method.
New stability results for delayed neural networks. This paper is concerned with the stability for delayed neural networks. By more fully making use of the information of the activation function, a new Lyapunov–Krasovskii functional (LKF) is constructed. Then a new integral inequality is developed, and more information of the activation function is taken into account when the derivative of the LKF is estimated. By Lyapunov stability theory, a new stability result is obtained. Finally, three examples are given to illustrate the stability result is less conservative than some recently reported ones.
Reliable control for linear systems with time-varying delays and parameter uncertainties. In this paper, reliable control for linear systems with time-varying delays and parameter uncertainties is considered. By constructing newly augmented Lyapunov–Krasovskii functionals and utilizing some mathematical techniques such as Leibnitz's rule, Schur's complement, reciprocally convex combination, and so on, a reliable controller design method for linear systems with time-varying delays and parameter uncertainties will be suggested in Theorem 1. Based on the result of Theorem 1, a non-reliable stabilization criterion will be presented in Corollary 1. Theorem 1 and Corollary 1 are derived within the framework of linear matrix inequalities (LMIs) which can be easily solved by utilizing various optimization algorithms. Two numerical examples are included to show the effectiveness and necessity of the proposed results.
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
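For context, the classical Jensen inequality is the baseline that such integral inequalities tighten; for symmetric $R > 0$ and $a < b$ it bounds the quadratic integral term that arises when differentiating the LKF (this is the standard special case, not the new inequality of the paper above):

```latex
\begin{equation}
-\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
\;\le\;
-\frac{1}{b-a}
\left( \int_{a}^{b} \dot{x}(s)\, \mathrm{d}s \right)^{\!\top}
R
\left( \int_{a}^{b} \dot{x}(s)\, \mathrm{d}s \right)
\end{equation}
```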
A looped-functional approach for robust stability analysis of linear impulsive systems A new functional-based approach is developed for the stability analysis of linear impulsive systems. The new method, which introduces looped functionals, considers non-monotonic Lyapunov functions and leads to LMI conditions devoid of exponential terms. This allows one to easily formulate dwell-time results, for both certain and uncertain systems. It is also shown that this approach may be applied to a wider class of impulsive systems than existing methods. Some examples, notably on sampled-data systems, illustrate the efficiency of the approach.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
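A minimal token-game interpreter for an ordinary Petri net (the two-place net is invented; it happens to be a marked graph, the subclass singled out above, since every place has exactly one input and one output transition):

```python
# An invented net: t1 moves a token p1 -> p2, t2 moves it back.
pre  = {"t1": {"p1": 1}, "t2": {"p2": 1}}   # input arcs: place -> transition
post = {"t1": {"p2": 1}, "t2": {"p1": 1}}   # output arcs: transition -> place

def enabled(marking, t):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre[t].items())

def fire(marking, t):
    """Firing consumes tokens from input places and produces in output places."""
    assert enabled(marking, t)
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] -= w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m

m = {"p1": 1, "p2": 0}
for t in ["t1", "t2", "t1"]:
    m = fire(m, t)
    print(t, m)
```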
Queue-based multi-processing LISP As the need for high-speed computers increases, the need for multi-processors will become more apparent. One of the major stumbling blocks to the development of useful multi-processors has been the lack of a good multi-processing language—one which is both powerful and understandable to programmers. Among the most compute-intensive programs are artificial intelligence (AI) programs, and researchers hope that the potential degree of parallelism in AI programs is higher than in many other applications. In this paper we propose multi-processing extensions to Lisp. Unlike other proposed multi-processing Lisps, this one provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
Indexing hypertext documents in context
ENIAM: a more complete conceptual schema language
TAER: time-aware entity retrieval-exploiting the past to find relevant entities in news articles Retrieving entities instead of just documents has become an important task for search engines. In this paper we study entity retrieval for news applications, and in particular the importance of the news trail history (i.e., past related articles) in determining the relevant entities in current articles. This is an important problem in applications that display retrieved entities to the user, together with the news article. We analyze and discuss some statistics about entities in news trails, unveiling some unknown findings such as the persistence of relevance over time. We focus on the task of query dependent entity retrieval over time. For this task we evaluate several features, and show that their combinations significantly improves performance.
On backwards and forwards reachable sets bounding for perturbed time-delay systems Linear systems with interval time-varying delay and unknown-but-bounded disturbances are considered in this paper. We study the problem of finding an outer bound of the forwards reachable sets and an inner bound of the backwards reachable sets of the system. Firstly, two definitions of forwards and backwards reachable sets, where initial state vectors are not necessarily equal to zero, are introduced. Then, by using the Lyapunov-Krasovskii method, two sufficient conditions for the existence of: (i) the smallest possible outer bound of forwards reachable sets; and (ii) the largest possible inner bound of backwards reachable sets, are derived. These conditions are presented in terms of linear matrix inequalities with two parameters that need to be tuned, which therefore can be efficiently solved by combining existing convex optimization algorithms with a two-dimensional search method to obtain optimal bounds. Lastly, the obtained results are illustrated by four numerical examples.
1.055
0.05
0.05
0.03
0.0125
0.0025
0.001116
0.000009
0
0
0
0
0
0
A Block-Based Inter-Band Lossless Hyperspectral Image Compressor We propose a hyperspectral image compressor called BH which considers its input image as being partitioned into square blocks, each lying entirely within a particular band, and compresses one such block at a time by using the following steps: first predict the block from the corresponding block in the previous band, then select a predesigned code based on the prediction errors, and finally encode the predictor coefficient and errors. Apart from giving good compression rates and being fast, BH can provide random access to spatial locations in the image. We hypothesize that BH works well because it accommodates the rapidly changing image brightness that often occurs in hyperspectral images. We also propose an intraband compressor called LM which is worse than BH, but whose performance helps explain BH's performance.
Efficient implementation of Edmonds' algorithm for finding optimum branchings on associative parallel processors In this paper we propose an efficient parallel implementation of Edmonds' algorithm for finding optimum branchings on a model of the SIMD type with vertical data processing (the STAR-machine). To this end, for a directed graph given as a list of triples (edge vertices and the weight), we construct a new associative version of Edmonds' algorithm. This version is represented as the corresponding STAR procedure whose correctness is proved. We obtain that on vertical processing systems Edmonds' algorithm takes O(n log n) time, where n is the number of graph vertices.
Parallel implementation of linear prediction model for lossless compression of hyperspectral airborne visible infrared imaging spectrometer images We present the implementation of a lossless hyperspectral image compression method for novel parallel environments. The method is an interband version of a linear prediction approach for hyperspectral images. The interband linear prediction method consists of two stages: predictive decorrelation that produces residuals and the entropy coding of the residuals. The compression part is embarrassingly parallel, while the decompression part uses pipelining to parallelize the method. The results and comparisons with other methods are discussed. The speedup of the thread version is almost linear with respect to the number of processors.
Prediction Trees and Lossless Image Compression: An Extended Abstract
Low-complexity lossy compression of hyperspectral images via informed quantization Lossy compression of hyperspectral and ultraspectral images is traditionally performed using 3D transform coding. This approach yields good performance, but the complexity and memory requirements make it unsuitable for onboard compression. In this paper we propose a low-complexity lossy compression scheme based on prediction, quantization and rate-distortion optimization. The scheme employs coset codes coupled with the new concept of "informed quantization", and requires no entropy coding. The performance of the resulting algorithm is competitive with that of state-of-the-art 3D transform coding schemes, but the complexity is immensely lower, making it suitable for onboard compression at high throughputs.
Lossless compression of AVIRIS images. Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral (224-band) images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The methods have two stages-predictive decorrelation (which produces residuals) and residual encoding. Good predictors are described, whose performance closely approaches limits imposed by sensor noise. It is imperative that these predictors make use of the high spectral correlations between bands. The residuals are encoded using variable-length coding (VLC) methods, and compression is improved by using eight codebooks whose design depends on the sensor's noise characteristics. Rice (1979) coding has also been evaluated; it loses 0.02-0.05 b/pixel compression compared with better VLC methods but is much simpler and faster. Results for compressing ten AVIRIS images are reported.
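A stripped-down sketch of the two-stage pipeline described above, using a least-squares interband predictor on synthetic data; the paper's noise-aware predictors, codebook selection, and entropy coders are far more refined than this:

```python
import numpy as np

def interband_residuals(cube):
    """cube: (bands, rows, cols). Predict each band linearly from band b-1."""
    residuals = [cube[0].astype(float)]            # first band has no reference
    for b in range(1, cube.shape[0]):
        prev, cur = cube[b - 1].ravel(), cube[b].ravel()
        # fit cur ≈ a * prev + c, exploiting the high spectral correlation
        a, c = np.polyfit(prev, cur, 1)
        residuals.append((cur - (a * prev + c)).reshape(cube[b].shape))
    return residuals                               # residuals go to the entropy coder

rng = np.random.default_rng(0)
base = rng.normal(100, 10, (16, 16))
cube = np.stack([base * (1 + 0.05 * b) + rng.normal(0, 1, base.shape)
                 for b in range(4)])               # synthetic correlated bands
res = interband_residuals(cube)
print([float(np.var(r)) for r in res])             # residual variance collapses
```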
Optimized onboard lossless and near-lossless compression of hyperspectral data using CALIC We propose a new lossless and near-lossless compression algorithm for hyperspectral images based on context-based adaptive lossless image coding (CALIC). Specifically, we propose a novel multiband spectral predictor, along with optimized model parameters and optimization thresholds. The resulting algorithm is suitable for compression of data in band-interleaved-by-line format; its performance evaluation on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data shows that it outperforms 3-D-CALIC as well as other state-of-the-art compression algorithms.
3D medical image compression based on multiplierless low-complexity RKLT and shape-adaptive wavelet transform A multiplierless low complexity reversible integer Karhunen-Loève transform (Low-RKLT) is proposed based on matrix factorization. Conventional methods based on KLT suffer from high computational complexity and the inability to be applied in lossless medical image compression. To solve these two problems, multiplierless Low-RKLT is investigated using multi-lifting in this paper. Combined with an ROI coding method, we have proposed a progressive lossy-to-lossless ROI compression method for three dimensional (3D) medical images with high performance. In our proposed method Low-RKLT is used for the inter-frame decorrelation after SA-DWT in the spatial domain. Simulation results show that the proposed method performs much better in both lossless and lossy compression than the 3D-DWT-based method.
Classified adaptive prediction and entropy coding for lossless coding of images Natural images often consist of many distinct regions with individual characteristics. Adaptive image coders exploit this feature of natural images to obtain better compression results. In this paper, we propose a classification-based scheme for both adaptive prediction and entropy coding in a lossless image coder. In the proposed coder, blocks of image samples (in the PCM domain) are classified to select an appropriate linear predictor from finite set of predictors. Once the predictors have been determined, the image is DPCM coded. A second classification is then performed to select a suitable entropy coder for each block of DPCM samples. These classification schemes are designed using two separate clustering procedures which attempt to minimize the bit-rate of the encoded image. The coder was tested on a set of monochrome images and was found to produce very promising results.
On Formalism in Specifications A critique of a natural-language specification, followed by presentation of a mathematical alternative, demonstrates the weakness of natural language and the strength of formalism in requirements specifications.
A program integration algorithm that accommodates semantics-preserving transformations Given a program Base and two variants, A and B, each created by modifying separate copies of Base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that includes both sets of changes as well as the portions of Base preserved in both variants. Text-based integration techniques, such as the one used by the UNIX diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of Base, A, and B. The first program-integration algorithm to provide such guarantees was developed by Horwitz, Prins, and Reps. However, a limitation of that algorithm is that it incorporates no notion of semantics-preserving transformations. This limitation causes the algorithm to be overly conservative in its definition of interference. For example, if one variant changes the way a computation is performed (without changing the values computed) while the other variant adds code that uses the result of the computation, the algorithm would classify those changes as interfering. This paper describes a new integration algorithm that is able to accommodate semantics-preserving transformations.
Argonaute: graphical description, semantics and verification of reactive systems by using a process algebra The Argonaute system is specifically designed to describe, specify and verify reactive systems such as communication protocols, real-time applications, man-machine interfaces, etc. It is based upon the Argos graphical language, whose syntax relies on the Higraphs formalism by D. Harel [HAR88], and whose semantics is given by using a process algebra. Automata form the basic notion of the language, and hierarchical or parallel decompositions are given by using operators of the algebra. The...
The Skip-Innovation Model for Sparse Images On sparse images, contiguous runs of identical symbols often occur in the same coding context. This paper proposes a model for efficiently encoding such runs in a two-dimensional setting. Because it is model based, the method can be used with any coding scheme. An experimental coder using the model compresses the CCITT fax documents 2% better than JBIG and is more than three times as fast. A low complexity application of the model is shown to dramatically improve the compression performance of JPEG-LS on structured material.
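The primitive the skip-innovation model builds on is run coding of identical symbols; a one-dimensional sketch of the idea (the 2-D context modeling that makes the method effective, and the coupling to an arithmetic coder, are omitted):

```python
def run_lengths(symbols):
    """Collapse consecutive identical symbols into (symbol, count) pairs."""
    runs, prev, count = [], None, 0
    for s in symbols:
        if s == prev:
            count += 1
        else:
            if prev is not None:
                runs.append((prev, count))
            prev, count = s, 1
    if prev is not None:
        runs.append((prev, count))
    return runs

row = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0]
print(run_lengths(row))   # long runs of 0 dominate on sparse images
```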
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.029669
0.029515
0.02932
0.014807
0.010923
0.005858
0.00234
0.000287
0.000092
0
0
0
0
0
Visual language implementation through standard compiler-compiler techniques We present a technique for implementing visual language compilers through standard compiler generation platforms. The technique exploits eXtended Positional Grammars (XPGs, for short) for modeling the visual languages in a natural way, and uses a set of mapping rules to translate an XPG specification into a translation schema. This lets us generate visual language parsers through standard compiler-compiler techniques and tools like YACC. The generated parser accepts exactly the same set of visual sentences derivable through the application of XPG productions. The technique represents an important achievement, since it enables us to perform visual language compiler construction through standard compiler-compilers rather than specific compiler generation tools. This makes our approach particularly appealing, since compiler-compilers are widely used and rely on a well-founded theory. Moreover, the approach provides the basis for the unification of traditional textual language technologies and visual language compiler technologies.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. (A toy flip-move sketch follows this record's score rows.)
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
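The tabu search entry in this record describes flip-style moves, tabu restrictions, and aspiration criteria for 0/1 problems. The sketch below is a deliberately minimal reading of that recipe, not the paper's specialized choice rules, advanced-level strategies, or Target Analysis: one-variable flips, a fixed tabu tenure, a greedy best-admissible move, and a penalty for constraint violations. The tenure, penalty weight, and knapsack instance are all invented for illustration.

```python
def tabu_knapsack(values, weights, capacities, iters=200, tenure=7):
    """Minimal tabu search for a 0/1 multiconstraint knapsack.

    Moves flip a single variable. A flipped variable becomes tabu for
    `tenure` iterations, unless flipping it would beat the best value
    found so far (a simple aspiration criterion). Infeasible moves are
    allowed but penalized, so the search can cross infeasible regions.
    """
    n = len(values)

    def loads(sol):
        return [sum(w[i] for i in range(n) if sol[i]) for w in weights]

    def score(sol):
        val = sum(v for v, xi in zip(values, sol) if xi)
        pen = sum(max(0, l - c) for l, c in zip(loads(sol), capacities))
        return val - 10 * pen       # invented penalty weight

    x = [0] * n
    tabu = {}                        # variable -> iteration when tabu expires
    best_x, best_val = x[:], 0

    for t in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            s = score(y)
            if tabu.get(i, -1) < t or s > best_val:   # admissible or aspirated
                candidates.append((s, i, y))
        if not candidates:
            continue
        s, i, x = max(candidates)    # greedy best admissible move
        tabu[i] = t + tenure
        if s > best_val and all(l <= c for l, c in zip(loads(x), capacities)):
            best_x, best_val = x[:], s
    return best_x, best_val

values = [10, 13, 7, 8, 4]
weights = [[2, 3, 1, 4, 2],          # constraint 1 coefficients
           [3, 1, 2, 2, 1]]          # constraint 2 coefficients
print(tabu_knapsack(values, weights, [7, 6]))   # ([1, 1, 1, 0, 0], 30)
```

On this toy two-constraint instance the search reaches the feasible optimum within a few flips; the paper's richer machinery matters on real multiconstraint knapsack benchmarks.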
A Pragmatic Understanding of "Knowing That" and "Knowing How": The Pivotal Role of Conceptual Structures What is the difference between knowing that a cake is baked and knowing how to bake a cake? In each, the core concepts are the same, cake and baking, yet there seems to be a significant difference. The classical distinction between knowing that and knowing how points to the pivotal role of conceptual structures in both reasoning about and using knowledge. Peirce's recognition of this pivotal role is most clearly seen in the pragmatic maxim that links theoretical and practical maxims. By extending Peirce's pragmatism with the notion of a general argument pattern, the relation between conceptual structures and these ways of knowing can be understood in terms of the filling instructions for concepts. Since a robust account of conceptual structures must be able to handle both the context of knowing that and knowing how, it would seem reasonable to think that there will be multiple representations for the filling instructions. This in turn suggests that a methodological principle of tolerance between those approaches that stress the theoretical understanding of concepts appropriate to knowing that and those that stress the proceduralist understanding of concepts appropriate to knowing how is desirable.
Report from the Joint W3C/IETF URI Planning Interest Group: Uniform Resource Identifiers (URIs), URLs, and Uniform Resource Names (URNs): Clarifications and Recommendations
Modeling Real Reasoning In this article we set out to develop a mathematical model of real-life human reasoning. The most successful attempt to do this, classical formal logic, achieved its success by restricting attention to formal reasoning within pure mathematics; more precisely, the process of proving theorems in axiomatic systems. Within the framework of mathematical logic, a logical proof consists of a finite sequence σ1, σ2, ..., σn of statements such that, for each i = 1, ..., n, σi is either an assumption for the argument (possibly an axiom) or else follows from one or more of σ1, ..., σi−1 by a rule of logic.
The symbol grounding problem There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the “symbol grounding problem”: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g. “An X is a Y that is Z”). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic “module,” however; the symbolic functions would emerge as an intrinsically “dedicated” symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions, allowing for parallel edge detection processing. The implementation is very simple and computationally efficient. (A toy recursive-smoothing sketch follows this record's score rows.)
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions (supplying commands and arguments to programs) and semantic functions (visually presenting application semantics and supporting problem solving cognition). The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models nor metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers.
A Software Development Environment for Improving Productivity
The navigation toolkit The problem
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24
0.24
0.24
0.12
0.00013
0
0
0
0
0
0
0
0
0
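The recursive edge detection entry in this record turns on one property: an IIR (recursive) smoother costs a constant number of operations per sample no matter how wide its effective kernel is, unlike an FIR convolution with a wide derivative-of-Gaussian mask. The sketch below shows only that property; it is a plain first-order forward/backward exponential smoother followed by a finite difference, not the paper's variationally optimal filter, and the smoothing constant and test signal are arbitrary choices for the demo.

```python
def smooth_iir(signal, alpha):
    """First-order recursive smoothing, forward then backward.

    Two passes make the response roughly symmetric; the cost per
    sample is constant regardless of the effective smoothing width.
    """
    fwd, acc = [], signal[0]
    for s in signal:
        acc = alpha * s + (1.0 - alpha) * acc
        fwd.append(acc)
    bwd, acc = [], fwd[-1]
    for s in reversed(fwd):
        acc = alpha * s + (1.0 - alpha) * acc
        bwd.append(acc)
    bwd.reverse()
    return bwd

def gradient(signal, alpha=0.3):
    """Finite difference of the smoothed signal; extrema mark edges."""
    sm = smooth_iir(signal, alpha)
    return [b - a for a, b in zip(sm, sm[1:])]

step = [0.0] * 10 + [1.0] * 10       # a noise-free step edge at index 10
g = gradient(step)
print(max(range(len(g)), key=lambda i: abs(g[i])))  # peaks near the step
```

A separable 2-D version would run the same one-dimensional recursions along rows and then columns, which is the source of the parallelism the abstract mentions.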
Computer-assisted analysis and refinement of informal software requirements documents This paper describes RARE (Reuse-Assisted Requirements Elicitation), a method enabling software requirements engineers to process informal software requirements effectively. RARE's object is to assist analysts in transforming requirements expressed in natural language into a comprehensive collection of rigorous specifications that can be used as a starting point in software development. However, unlike other approaches to managing requirements documents, RARE focuses on the application of reuse-intensive methods of dealing with requirements documents, their contents and structure, and the processes involved in the analysis and refinement of requirements texts. The RARE method circumscribes an iterative process of planning, gathering and elaboration analysis, refinement, adaptation, integration and validation of requirements texts. The paper also describes the operation of IDIOM (Informal Document Interpreter, Organiser and Manager), a requirements management tool that supports the RARE method
Requirements Classification and Reuse: Crossing Domain Boundaries A serious problem in the classification of software project artefacts for reuse is the natural partitioning of classification terms into many separate domains of discourse. This problem is particularly pronounced when dealing with requirements artefacts that need to be matched with design components in the refinement process. In such a case, requirements can be described with terms drawn from a problem domain (e.g. games), whereas designs are described with terms characteristic of the solution domain (e.g. implementation). The two domains have not only distinct terminology, but also different semantics and use of their artefacts. This paper describes a method of cross-domain classification of requirements texts with a view to facilitating their reuse and their refinement into reusable design components.
AbstFinder, A Prototype Natural Language Text Abstraction Finder for Use in Requirements Elicitation Abstraction identification is named as a key problem in requirements analysis. Typically, the abstractions must be found among the large mass of natural language text collected from the clients and users. This paper motivates and describes a new approach, based on traditional signal processing methods, for finding abstractions in natural language text, and offers a new tool, AbstFinder, as an implementation of this approach. The advantages and disadvantages of the approach and the design of the tool are discussed in detail. Various scenarios for use of the tool are offered. Some of these scenarios were used in a case study of the effectiveness of the tool on an industrial-strength example of finding abstractions in a request for proposals. (A toy fragment-matching sketch follows this record's score rows.)
NL-OOPS: from natural language to object oriented requirements using the natural language processing system LOLITA This paper describes NL-OOPS, a CASE tool that supports requirements analysis by generating object oriented models from natural language requirements documents. The full natural language analysis is obtained by using the Natural Language Processing System LOLITA as the core system. The object oriented analysis module implements an algorithm for the extraction of the objects and their associations for use in creating object models.
Making Workflow Change Acceptable Virtual professional communities are supported by network information systems composed from standard Internet tools. To satisfy the interests of all community members, a user-driven approach to requirements engineering is proposed that produces not only meaningful but also acceptable specifications. This approach is especially suited for workflow systems that support partially structured, evolving work processes. To ensure the acceptability, social norms must guide the specification process. The RENISYS specification method is introduced, which facilitates this process using composition norms as formal representations of social norms. Conceptual graph theory is used to represent four categories of knowledge definitions: type definitions, state definitions, action norms and composition norms. It is shown how the composition norms guide the legitimate user-driven specification process by analysing a case on the development of an electronic law journal.
Conceptual Structures: Fulfilling Peirce's Dream, Fifth International Conference on Conceptual Structures, ICCS '97, Seattle, Washington, USA, August 3-8, 1997, Proceedings
A Total System Design Framework
The Three Dimensions of Requirements Engineering Requirements engineering (RE) is perceived as an area of growing importance. Due to the increasing effort spent on research in this area, many contributions to solve different problems within RE exist. The purpose of this paper is to identify the main goals to be reached during the requirements engineering process in order to develop a framework for RE. This framework consists of the three dimensions:
Flow Sketch Methodology: A Practical Requirements Definition Technique Based on Data Flow Concept This paper discusses a new simple methodology for defining software system requirements. We have developed a practical approach which we call FS (Flow Sketch) methodology. This methodology, based on the data flow concept, has been developed to provide a precise means of expressing the user's requirements. The user's requirements are presented in data form on particular format cards. Data are classified and the relationships between data are decided through brainstorming. Then, a requirement definition model is defined. FS methodology employs diagrammatic notation. This notation is suitable for the visual and interactive description of the dynamic system data flow. As a result, misunderstandings about the software system between the software producer and the software user will decrease.
Compact chart: a program logic notation with high describability and understandability This paper describes an improved flow chart notation, Compact Chart, developed because the flow chart concept is effective in constructing program logics but the conventional notation for it is ineffective. By introducing the idea of separating control transfer from process description, Compact Charting gives an improved method of representing and understanding program logics.
A fixpoint semantics of event systems with and without fairness assumptions We present a fixpoint semantics of event systems. The semantics is presented in a general framework without concerns of fairness. Soundness and completeness of rules for deriving leads-to properties are proved in this general framework. The general framework is instantiated to minimal progress and weak fairness assumptions and similar results are obtained. We show the power of these results by deriving sufficient conditions for leads-to under minimal progress proving soundness of proof obligations without reasoning over state-traces.
Making Distortions Comprehensible This paper discusses visual information representation from the perspective of human comprehension. The distortion viewing paradigm is an appropriate focus for this discussion as its motivation has always been to create more understandable displays. While these techniques are becoming increasingly popular for exploring images that are larger than the available screen space, in fact users sometimes report confusion and disorientation. We provide an overview of structural changes made in response to this phenomenon and examine methods for incorporating visual cues based on human perceptual skills.
A framework for analyzing and testing requirements with actors in conceptual graphs Software has become an integral part of many people's lives, whether knowingly or not. One key to producing quality software on time and within budget is to efficiently elicit consistent requirements. One way to do this is to use conceptual graphs. Requirements inconsistencies, if caught early enough, can prevent one part of a team from creating unnecessary design, code and tests that would be thrown out when the inconsistency was finally found. Testing requirements for consistency early and automatically is a key to a project staying within budget. This paper shares an experience with a mature software project that involved translating a software requirements specification into a conceptual graph, and recommends several actors that could be created to automate a requirements consistency graph.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.1161
0.12
0.040741
0.02
0.008
0.000853
0.000088
0.000012
0
0
0
0
0
0
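The AbstFinder entry in this record looks for abstractions as character sequences that recur across the natural-language text, in the spirit of signal matching. The sketch below is a very loose approximation of that idea rather than the tool's actual matcher: difflib's matching blocks stand in for the signal-processing comparison, and fragments recurring across sentence pairs are counted as candidate abstractions. The length threshold, sample requirements, and function name are all invented.

```python
from collections import Counter
from difflib import SequenceMatcher

def abstraction_candidates(sentences, min_len=6, top=5):
    """Count character fragments shared between pairs of sentences.

    Frequently recurring fragments are reported as candidate
    abstractions -- a rough, difflib-based stand-in for AbstFinder's
    repeated-sequence matching.
    """
    counts = Counter()
    for i, a in enumerate(sentences):
        for b in sentences[i + 1:]:
            matcher = SequenceMatcher(None, a.lower(), b.lower())
            for block in matcher.get_matching_blocks():
                frag = a[block.a:block.a + block.size].strip().lower()
                if len(frag) >= min_len:
                    counts[frag] += 1
    return counts.most_common(top)

reqs = [
    "The system shall log every failed login attempt.",
    "Failed login attempts shall trigger an operator alert.",
    "The audit report lists failed login attempts per user.",
]
print(abstraction_candidates(reqs))  # fragments around "failed login attempt"
```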
Learning Analytics Dashboards to Support Adviser-Student Dialogue. This paper presents LISSA (“Learning dashboard for Insights and Support during Study Advice”), a learning analytics dashboard designed, developed, and evaluated in collaboration with study advisers. The overall objective is to facilitate communication between study advisers and students by visualizing grade data that is commonly available in any institution. More specifically, the dashboard attemp...
A Semantically Enriched Context-Aware OER Recommendation Strategy and Its Application to a Computer Science OER Repository This paper describes a knowledge-based strategy for recommending educational resources—worked problems, exercises, quiz questions, and lecture notes—to learners in the first two courses in the introductory sequence of a computer science major (CS1 and CS2). The goal of the recommendation strategy is to provide support for personalized access to the resources that exist in open educational repositories. The strategy uses: 1) a description of the resources based on metadata standards enriched by ontology-based semantic indexing, and 2) contextual information about the user (her knowledge of that particular field of learning). The results of an experimental analysis of the strategy's performance are presented. These demonstrate that the proposed strategy offers a high level of personalization and can be adapted to the user. An application of the strategy to a repository of computer science open educational resources was well received by both educators and students and had promising effects on the student performance and dropout rates.
Evaluating Recommender Systems for Technology Enhanced Learning: A Quantitative Survey The increasing number of publications on recommender systems for Technology Enhanced Learning (TEL) evidence a growing interest in their development and deployment. In order to support learning, recommender systems for TEL need to consider specific requirements, which differ from the requirements for recommender systems in other domains like e-commerce. Consequently, these particular requirements motivate the incorporation of specific goals and methods in the evaluation process for TEL recommender systems. In this article, the diverse evaluation methods that have been applied to evaluate TEL recommender systems are investigated. A total of 235 articles are selected from major conferences, workshops, journals and books where relevant work have been published between 2000 and 2014. These articles are quantitatively analysed and classified according to the following criteria: type of evaluation methodology, subject of evaluation, and effects measured by the evaluation. Results from the survey suggest that there is a growing awareness in the research community of the necessity for more elaborate evaluations. At the same time, there is still substantial potential for further improvements. This survey highlights trends and discusses strengths and shortcomings of the evaluation of TEL recommender systems thus far, thereby aiming to stimulate researchers to contemplate novel evaluation approaches.
A Comparative Framework to Evaluate Recommender Systems in Technology Enhanced Learning: a Case Study. When proposing a novel recommender system, one difficult part is its evaluation. Especially in Technology Enhanced Learning (TEL), this phase is critical because those systems influence students or educators in educational tasks. Our research aims to propose a framework for conducting comparative experiments of different recommender systems in the same educational context. The framework is expected to provide the accuracy of subject systems within a single experiment, depicting the benefits of a novel system against others. We also present an application of such a framework for a comparative experiment of popular systems in TEL like Google, Slideshare, Youtube, MERLOT, Connexions and ARIADNE. Our results show that the proposed framework has been effective in comparing the accuracy of those systems, giving a clear picture of their performance relative to one another. Moreover, the results of the experiment can be used as a benchmark when evaluating novel recommender systems in TEL.
Student Performance Prediction Using Collaborative Filtering Methods This paper shows how to utilize collaborative filtering methods for student performance prediction. These methods are often used in recommender systems. The basic idea of such systems is to utilize the similarity of users based on their ratings of the items in the system. We have decided to employ these techniques in the educational environment to predict student performance. We calculate the similarity of students utilizing their study results, represented by the grades of their previously passed courses. As a real-world example we show results of the performance prediction of students who attended courses at Masaryk University. We describe the data, processing phase, evaluation, and finally the results proving the success of this approach. (A toy user-based sketch follows this record's score rows.)
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they ask whether such a controller can prevent the explosion of the number of tokens.
Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automaton which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
Further Improvement of Free-Weighting Matrices Technique for Systems With Time-Varying Delay A novel method is proposed in this note for stability analysis of systems with a time-varying delay. Appropriate Lyapunov functional and augmented Lyapunov functional are introduced to establish some improved delay-dependent stability criteria. Less conservative results are obtained by considering the additional useful terms (which are ignored in previous methods) when estimating the upper bound of the derivative of Lyapunov functionals and introducing the new free-weighting matrices. The resulting criteria are extended to the stability analysis for uncertain systems with time-varying structured uncertainties and polytopic-type uncertainties. Numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method
Protocol verification as a hardware design aid The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
Executable requirements for embedded systems An approach to requirements specification for embedded systems, based on constructing an executable model of the proposed system interacting with its environment, is proposed. The approach is explained, motivated, and related to data-oriented specification techniques. Portions of a specification language embodying it are introduced, and illustrated with an extended example in which the requirements for a process-control system are developed incrementally.
Involutions On Relational Program Calculi The standard Galois connection between the relational and predicate-transformer models of sequential programming (defined in terms of weakest precondition) confers a certain similarity between them. This paper investigates the extent to which the important involution on transformers (which, for instance, interchanges demonic and angelic nondeterminism, and reduces the two kinds of simulation in the relational model to one kind in the transformer model) carries over to relations. It is shown that no exact analogue exists; that the two complement-based involutions are too weak to be of much use; but that the translation to relations of transformer involution under the Galois connection is just strong enough to support Boolean-algebra style reasoning, a claim that is substantiated by proving properties of deterministic computations. Throughout, the setting is that of the guarded-command language augmented by the usual specification commands; and where possible algebraic reasoning is used in place of the more conventional semantic reasoning.
Reactive and Real-Time Systems Course: How to Get the Most Out of it The paper describes the syllabus and the students’ projects from a graduate course on the subject of “Reactive and Real-Time Systems”, taught at Tel-Aviv University and at the Open University of Israel. The course focuses on the development of provably correct reactive real-time systems. The course combines theoretical issues with practical implementation experience, trying to make things as tangible as possible. Hence, the mathematical and logical frameworks introduced are followed by presentation of relevant software tools and the students’ projects are implemented using these tools. The course is planned so that no special purpose hardware is needed and so that all software tools used are freely available from various Internet sites and can be installed quite easily. This makes our course attractive to institutions and instructors for which purchasing and maintaining a special lab is not feasible due to budget, space, or time limitations (as in our case). In the paper we elaborate on the rationale behind the syllabus and the selection of the students’ projects, presenting an almost complete description of a sample design of one team’s project.
Reversible Denoising and Lifting Based Color Component Transformation for Lossless Image Compression An undesirable side effect of reversible color space transformation, which consists of lifting steps (LSs), is that while removing correlation it contaminates transformed components with noise from other components. Noise affects particularly adversely the compression ratios of lossless compression algorithms. To remove correlation without increasing noise, a reversible denoising and lifting step (RDLS) was proposed that integrates denoising filters into LS. Applying RDLS to color space transformation results in a new image component transformation that is perfectly reversible despite involving the inherently irreversible denoising; the first application of such a transformation is presented in this paper. For the JPEG-LS, JPEG 2000, and JPEG XR standard algorithms in lossless mode, the application of RDLS to the RDgDb color space transformation with simple denoising filters is especially effective for images in the native optical resolution of acquisition devices. It results in improving compression ratios of all those images in cases when unmodified color space transformation either improves or worsens ratios compared with the untransformed image. The average improvement is 5.0–6.0% for two out of the three sets of such images, whereas average ratios of images from standard test-sets are improved by up to 2.2%. For the efficient image-adaptive determination of filters for RDLS, a couple of fast entropy-based estimators of compression effects that may be used independently of the actual compression algorithm are investigated and an immediate filter selection method based on the detector precision characteristic model driven by image acquisition parameters is introduced.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
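The student-performance entry in this record predicts grades with collaborative filtering over previously passed courses, but the abstract fixes neither the similarity measure nor the aggregation. The sketch below therefore assumes a standard user-based scheme: cosine similarity over commonly taken courses and a similarity-weighted average over the k most similar students who took the target course. The students, grades, and k are toy data invented for the example.

```python
import math

def cosine(u, v, courses):
    """Cosine similarity between two grade dicts over shared courses."""
    dot = sum(u[c] * v[c] for c in courses)
    norm = math.sqrt(sum(u[c] ** 2 for c in courses)) * \
           math.sqrt(sum(v[c] ** 2 for c in courses))
    return dot / norm if norm else 0.0

def predict_grade(grades, student, course, k=2):
    """Similarity-weighted average of the k most similar students' grades."""
    target = grades[student]
    neighbours = []
    for other, record in grades.items():
        if other == student or course not in record:
            continue
        shared = (set(target) & set(record)) - {course}
        if shared:
            neighbours.append((cosine(target, record, shared), record[course]))
    neighbours.sort(reverse=True)        # most similar students first
    top = neighbours[:k]
    weight = sum(sim for sim, _ in top)
    return sum(sim * g for sim, g in top) / weight if weight else None

grades = {
    "ana":  {"calculus": 1.0, "algebra": 1.5, "databases": 2.0},
    "ben":  {"calculus": 3.0, "algebra": 3.0, "databases": 3.5},
    "carl": {"calculus": 1.0, "algebra": 1.5},   # databases grade unknown
}
print(predict_grade(grades, "carl", "databases"))  # ~2.74, blending 2.0 and 3.5
```

A known caveat of plain cosine on all-positive grade vectors is that every pair looks fairly similar; mean-centred (Pearson-style) similarity usually separates students better.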
Reversible image steganographic scheme via predictive coding. The reversible image steganographic scheme in this study provides the ability to embed secret data into a host image and then recover the host image without losing any information when the secret data is extracted. In this paper, a reversible image steganographic scheme based on predictive coding is proposed by embedding secret data into compression codes during the lossless image compression. The proposed scheme effectively provides a lossless hiding mechanism in the compression domain. During the predictive coding stage, the proposed scheme embeds secret data into error values by referring to a hiding-tree. In an entropy decoding stage, the secret data can be extracted by referring to the hiding-tree, and the host image can be recovered during the predictive decoding stage. The experimental results show that the average hiding capacity of the proposed scheme is 0.992 bits per pixel (bpp), and the host image can be reconstructed without losing any information when the secret data is extracted. (A toy difference-expansion sketch follows this record's score rows.)
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
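The steganography entry heading this record embeds secret bits in prediction errors via a hiding-tree and recovers the host image exactly, but the abstract gives no construction details. The sketch below therefore substitutes the simplest reversible embedding in the same spirit, Tian-style difference expansion on a pixel pair: the bit rides in an expanded difference, and both pixels and the bit are recovered exactly. Overflow handling, which any real scheme needs to keep pixels in range, is deliberately omitted.

```python
def embed_pair(x, y, bit):
    """Hide one bit in a pixel pair by difference expansion.

    The pair's floored mean l is preserved and the difference h
    becomes 2*h + bit, which the extractor can invert exactly.
    """
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_pair(x2, y2):
    """Recover the original pixel pair and the hidden bit."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1
    return l + (h + 1) // 2, l - h // 2, bit

stego = embed_pair(100, 98, 1)
print(stego)                  # (102, 97)
print(extract_pair(*stego))   # (100, 98, 1) -- host recovered exactly
```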
Improved Stability Analysis for Delayed Neural Networks. In this brief, by constructing an augmented Lyapunov-Krasovskii functional in a triple integral form, the stability analysis of delayed neural networks is investigated. In order to exploit more accurate bounds for the derivatives of triple integrals, new double integral inequalities are developed, which include some recently introduced estimation techniques as special cases. The information on the...
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Modeling Requirements with Goals in Virtual University Environment In recent years, goal-based requirements analysis methods have attracted increasing attention in the area of requirements engineering. However, there is no systematic way in the existing approaches to handling the impacts of requirements on the structuring of software architecture. As an attempt towards the investigation of the interactions among goals, scenarios, and software architectures, we proposed, in this paper, a goal-based approach to building software architecture based on the interactions in an incremental fashion. The proposed approach is illustrated using the problem domain of virtual university environment.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Multivariate Heuristic Model for Fuzzy Time-Series Forecasting Fuzzy time-series models have been widely applied due to their ability to handle nonlinear data directly and because no rigid assumptions for the data are needed. In addition, many such models have been shown to provide better forecasting results than their conventional counterparts. However, since most of these models require complicated matrix computations, this paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models into multivariate models. Such a multivariate heuristic function can easily be extended and integrated with various univariate models. Furthermore, the integrated model can handle multiple variables to improve forecasting results and, at the same time, avoid complicated computations due to the inclusion of multiple variables.
A neural network-based fuzzy time series model to improve forecasting Neural networks have been popular due to their capabilities in handling nonlinear relationships. Hence, this study intends to apply neural networks to implement a new fuzzy time series model to improve forecasting. Differing from previous studies, this study includes the various degrees of membership in establishing fuzzy relationships, which assist in capturing the relationships more properly. These fuzzy relationships are then used to forecast the stock index in Taiwan. With more information, the forecasting is expected to improve, too. In addition, due to the greater amount of information covered, the proposed model can be used to forecast directly regardless of whether out-of-sample observations appear in the in-sample observations. This study performs out-of-sample forecasting and the results are compared with those of previous studies to demonstrate the performance of the proposed model.
Multi-attribute fuzzy time series method based on fuzzy clustering Traditional time series methods can predict the seasonal problem, but fail to forecast the problems with linguistic value. An alternative forecasting method such as fuzzy time series is utilized to deal with these kinds of problems. Two shortcomings of the existing fuzzy time series forecasting methods are that they lack persuasiveness in determining universe of discourse and the length of intervals, and that they lack objective method for multiple-attribute fuzzy time series. This paper introduces a novel multiple-attribute fuzzy time series method based on fuzzy clustering. The methods of fuzzy clustering are integrated in the processes of fuzzy time series to partition datasets objectively and enable processing of multiple attributes. For verification, this paper uses two datasets: (1) the yearly data on enrollments at the University of Alabama, and (2) the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) futures. The forecasting results show that the proposed method can forecast not only one-attribute but also multiple-attribute data effectively and outperform the listing methods.
AN ENHANCED DETERMINISTIC FUZZY TIME SERIES FORECASTING MODEL The study of fuzzy time series has attracted great interest and is expected to expand rapidly. Various forecasting models, including high-order models, have been proposed to improve forecasting accuracy or reduce computational cost. However, there exist two important issues, namely rule redundancy and high-order redundancy, that have not yet been investigated. This article proposes a novel forecasting model to tackle such issues. It overcomes the major hurdle of determining the k-order in high-order models and is enhanced to allow the handling of multi-factor forecasting problems by removing the overhead of deriving all fuzzy logic relationships beforehand. Two novel performance evaluation metrics are also formally derived for comparing performances of related forecasting models. Experimental results demonstrate that the proposed forecasting model outperforms the existing models in efficiency.
A FCM-based deterministic forecasting model for fuzzy time series The study of fuzzy time series has increasingly attracted much attention due to its salient capabilities of tackling uncertainty and vagueness inherent in the data collected. A variety of forecasting models including high-order models have been devoted to improving forecasting accuracy. However, the high-order forecasting approach is accompanied by the crucial problem of determining an appropriate order number. Consequently, such a deficiency was recently solved by Li and Cheng [S.-T. Li, Y.-C. Cheng, Deterministic Fuzzy time series model for forecasting enrollments, Computers and Mathematics with Applications 53 (2007) 1904-1920] using a deterministic forecasting method. In this paper, we propose a novel forecasting model to enhance forecasting functionality and allow processing of two-factor forecasting problems. In addition, this model applies fuzzy c-means (FCM) clustering to deal with interval partitioning, which takes the nature of data points into account and produces unequal-sized intervals. Furthermore, in order to cope with the randomness of initially assigned membership degrees of FCM clustering, Monte Carlo simulations are used to justify the reliability of the proposed model. The superior accuracy of the proposed model is demonstrated by experiments comparing it to other existing models using real-world empirical data.
A New Approach Of Bivariate Fuzzy Time Series Analysis To The Forecasting Of A Stock Index In recent years, the innovation and improvement of forecasting techniques have caught more and more attention. Especially in the fields of financial economics, management planning and control, forecasting provides indispensable information in the decision-making process. If we merely use the time series of the closing price array to build a forecasting model, a question that arises is: can the model exhibit the real case honestly? Since the daily closing price of a stock index is uncertain and indistinct, a decision based on a biased future trend may result in the danger of huge loss. Moreover, there are many factors that influence the daily closing price, such as trading volume and exchange rate, and so on. In this research, we propose a new approach for bivariate fuzzy time series analysis and forecasting through fuzzy relation equations. An empirical study on closing price and trading volume of a bivariate fuzzy time series model for the Taiwan Weighted Stock Index is constructed. The performance of linguistic forecasting and the comparison with the bivariate ARMA model are also illustrated.
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity.
Machine Learning This exciting addition to the McGraw-Hill Series in Computer Science focuses on the concepts and techniques that contribute to the rapidly changing field of machine learning--including probability and statistics, artificial intelligence, and neural networks--unifying them all in a logical and coherent manner. Machine Learning serves as a useful reference tool for software developers and researchers, as well as an outstanding text for college students. Table of contents: Chapter 1. Introduction; Chapter 2. Concept Learning and the General-to-Specific Ordering; Chapter 3. Decision Tree Learning; Chapter 4. Artificial Neural Networks; Chapter 5. Evaluating Hypotheses; Chapter 6. Bayesian Learning; Chapter 7. Computational Learning Theory; Chapter 8. Instance-Based Learning; Chapter 9. Inductive Logic Programming; Chapter 10. Analytical Learning; Chapter 11. Combining Inductive and Analytical Learning; Chapter 12. Reinforcement Learning.
Goal-Based Requirements Analysis Goals are a logical mechanism for identifying, organizing and justifying software requirements. Strategies are needed for the initial identification and construction of goals. In this paper we discuss goals from the perspective of two themes: goal analysis and goal evolution. We begin with an overview of the goal-based method we have developed and summarize our experiences in applying our method to a relatively large example. We illustrate some of the issues that practitioners face when using a goal-based approach to specify the requirements for a system and close the paper with a discussion of needed future research on goal-based requirements analysis and evolution. Keywords: goal identification, goal elaboration, goal refinement, scenario analysis, requirements engineering, requirements methods
A logic covering undefinedness in program proofs Recursive definition often results in partial functions; iteration gives rise to programs which may fail to terminate for some inputs. Proofs about such functions or programs should be conducted in logical systems which reflect the possibility of "undefined values". This paper provides an axiomatization of such a logic together with examples of its use.
Design And Implementation Of A Low Complexity Lossless Video Codec A low complexity lossless video codec design, which is an extension to the well known CALIC system, is presented. It starts with a reversible color space transform to decorrelate the video signal components. A gradient-adjusted prediction scheme facilitated with an error feedback mechanism follows to obtain the prediction value for each pixel. Finally, an adaptive Golomb-Rice coding scheme in conjunction with a context modeling technique to determine the K value adaptively is applied to encode the prediction errors faithfully. The proposed scheme exhibits a compression performance comparable to that of CALIC but with almost one half the computing complexity. It also outperforms other well recognized schemes such as the JPEG-LS scheme by 12% in compression ratio. A chip design using TSMC 0.18um technology was also developed. It features a throughput rate of 13.83 M pixels per second and a design gate count of 35k plus 3.7kB memory.
Expressing the relationships between multiple views in requirements specification The authors generalize and formalize the definition of a ViewPoint to facilitate its manipulation for composite system development. A ViewPoint is defined to be a loosely-coupled, locally managed object encapsulating representation knowledge, development process knowledge and partial specification knowledge about a system and its domain. In attempting to integrate multiple requirements specification ViewPoints, overlaps must be identified and expressed, complementary participants made to interact and cooperate, and contradictions resolved. The notion of inter-ViewPoint communication is addressed as a vehicle for ViewPoint integration. The communication model presented straddles both the method construction stage during which inter-ViewPoint relationships are expressed, and the method application stage during which these relationships are enacted
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.066836
0.048667
0.047133
0.043213
0.036207
0.017677
0
0
0
0
0
0
0
0
The Composition of Event-B Models The transition from classical B [2] to the Event-B language and method [3] has seen the removal of some forms of model structuring and composition, with the intention of reinventing them in future. This work contributes to that reinvention. Inspired by a proposed method for state-based decomposition and refinement [5] of an Event-B model, we propose a familiar parallel event composition (over disjoint state variable lists), and the less familiar event fusion (over intersecting state variable lists). A brief motivation is provided for these and other forms of composition of models, in terms of feature-based modelling. We show that model consistency is preserved under such compositions. More significantly we show that model composition preserves refinement.
Alternating simulation and IOCO We propose a symbolic framework called guarded labeled assignment systems or GLASs and show how GLASs can be used as a foundation for symbolic analysis of various aspects of formal specification languages. We define a notion of i/o-refinement over GLASs as an alternating simulation relation and provide formal proofs that relate i/o-refinement to ioco. We show that non-i/o-refinement reduces to a reachability problem and provide a translation from bounded non-i/o-refinement or bounded non-ioco to checking first-order assertions. We define i/o-refinement as an alternating simulation and show that it is a generalization of ioco for all GLASs, generalizing an earlier result (29) for the deterministic case. The notion of i/o-refinement is essentially a compositional version of ioco. We provide a rigorous account for formally dealing with quiescence in GLASs in a way that supports symbolic analysis with or without the presence of quiescence. We also define the notion of a symbolic composition of GLASs that generalizes the composition of model programs (31) and respects the standard parallel synchronous composition of LTSs (21, 23) with the interleaving semantics of unshared labels. Composition of GLASs is used to show that the i/o-refinement relation between two GLASs can be formulated as a condition on the composite GLAS. This leads to a mapping of the non-i/o-refinement checking problem into a reachability checking problem for a pair of GLASs. For a class of GLASs that we call robust we can furthermore use established methods developed for verifying safety properties of reactive systems. We show that the non-i/o-refinement checking problem can be reduced to first-order assertion checking by using proof rules similar to those that have been formulated for checking invariants of reactive systems. It can also be approximated as a bounded model program checking problem or BMPC (30). The practical implications regarding symbolic analysis are not studied in this paper, but lead to a way of applying state-of-the-art satisfiability modulo theories (SMT) technology as outlined in (30, 29). However, the concrete examples used in the paper are tailored to such analysis and illustrate the use of background theories that are supported by state-of-the-art SMT solvers such as Z3 (14).
Event-b decomposition for parallel programs We present here a case study developing a parallel program. The approach that we use combines refinement and decomposition techniques. This involves, in the first step, abstractly specifying the aim of the program, then subsequently introducing shared information between sub-processes via refinement. Afterwards, decomposition is applied to split the resulting model into sub-models for different processes. These sub-models are later independently developed using refinement. Our approach aids the understanding of parallel programs and reduces the complexity in their proofs of correctness.
Qualitative Action Systems An extension to action systems is presented facilitating the modeling of continuous behavior in the discrete domain. The original action system formalism has been developed by Back et al. in order to describe parallel and distributed computations of discrete systems, i.e. systems with discrete state space and discrete control. In order to cope with hybrid systems, i.e. systems with continuous evolution and discrete control, two extensions have been proposed: hybrid action systems and continuous action systems. Both use differential equations (relations) to describe continuous evolution. Our version of action systems takes an alternative approach by adding a level of abstraction: continuous behavior is modeled by Qualitative Differential Equations that are the preferred choice when it comes to specifying abstract and possibly non-deterministic requirements of continuous behavior. Because their solutions are transition systems, all evolutions in our qualitative action systems are discrete. Based on hybrid action systems, we develop a new theory of qualitative action systems and discuss how we have applied such models in the context of automated test-case generation for hybrid systems.
Conjunction as composition Partial specifications written in many different specification languages can be composed if they are all given semantics in the same domain, or alternatively, all translated into a common style of predicate logic. The common semantic domain must be very general, the particular semantics assigned to each specification language must be conducive to composition, and there must be some means of communication that enables specifications to build on one another. The criteria for success are that a wide variety of specification languages should be accommodated, there should be no restrictions on where boundaries between languages can be placed, and intuitive expectations of the specifier should be met.
Stepwise refinement of parallel algorithms The refinement calculus and the action system formalism are combined to provide a uniform method for constructing parallel and distributed algorithms by stepwise refinement. It is shown that the sequential refinement calculus can be used as such for most of the derivation steps. Parallelism is introduced during the derivation by refinement of atomicity. The approach is applied to the derivation of a parallel version of the Gaussian elimination method for solving simultaneous linear equation systems.
A Methodology for Developing Distributed Programs A methodology, different from the existing ones, for constructing distributed programs is presented. It is based on the well-known idea of developing distributed programs via synchronous and centralized programs. The distinguishing features of the methodology are: 1) specifications include process structure information and distributed programs are developed taking this information into account, 2) a new class of programs, called PPSA's, is used in the development process, and 3) a transformational approach is suggested to solve the problems inherent in the method of developing distributed programs through synchronous and centralized programs. The methodology is illustrated with an example.
Unifying wp and wlp Boolean-valued predicates over a state space are isomorphic to its characteristic functions into {0,1}. Enlarging that range to {-1,0,1} allows the definition of extended predicates whose associated transformers generalise the conventional wp and wlp. The correspondingly extended healthiness conditions include the new 'sub-additivity', an arithmetic inequality over predicates. Keywords: Formal semantics, program correctness, weakest precondition, weakest liberal precondition, Egli-Milner order.
A generalization of Dijkstra's calculus Dijkstra's calculus of guarded commands can be generalized and simplified by dropping the law of the excluded miracle. This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion. The treatment of recursion uses the fixpoint method from denotational semantics. The paper relies only on the algebraic properties of predicates; individual states are not mentioned (except for motivation). To achieve this, we apply the correspondence between programs and predicates that underlies predicative programming.The paper is written from the axiomatic semantic point of view, but its contents can be described from the denotational semantic point of view roughly as follows: The Plotkin-Apt correspondence between wp semantics and the Smyth powerdomain is extended to a correspondence between the full wp/wlp semantics and the Plotkin powerdomain extended with the empty set.
Formal methods: state of the art and future directions (E.M. Clarke and J.M. Wing)
On the relation between Memon's and the modified Zeng's palette reordering methods Palette reordering has been shown to be a very effective approach for improving the compression of color-indexed images by general purpose continuous-tone image coding techniques. In this paper, we provide a comparison, both theoretical and experimental, of two of these methods: the pairwise merging heuristic proposed by Memon et al. and the recently proposed modification of Zeng's method. This analysis shows how several parts of the algorithms relate and how their performance is affected by some modifications. Moreover, we show that Memon's method can be viewed as an extension of the modified version of Zeng's technique and, therefore, that the modified Zeng's method can be obtained through some simplifications of Memon's method.
Compact and localized distributed data structures This survey concerns the role of data structures for compactly storing and representing various types of information in a localized and distributed fashion. Traditional approaches to data representation are based on global data structures, which require access to the entire structure even if the sought information involves only a small and local set of entities. In contrast, localized data representation schemes are based on breaking the information into small local pieces, or labels, selected in a way that allows one to infer information regarding a small set of entities directly from their labels, without using any additional (global) information. The survey concentrates mainly on combinatorial and algorithmic techniques, such as adjacency and distance labeling schemes and interval schemes for routing, and covers complexity results on various applications, focusing on compact localized schemes for message routing in communication networks.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows; specifying means stating, independently of any implementation, the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. The tools implemented and examples of how these languages can be applied to real security specification problems are described.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.0781
0.0624
0.035714
0.0156
0.006757
0.000952
0.000016
0.000003
0
0
0
0
0
0
Are Object-Oriented Concepts Useful to Real-Time Systems Development?
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Procedures and concurrency: A study in proof
A proof rule for process-creation
The cooperation test: a syntax-directed verification method
A Proof System for Communicating Sequential Processes An axiomatic proof system is presented for proving partial correctness and absence of deadlock (and failure) of communicating sequential processes. The key (meta) rule introduces cooperation between proofs, a new concept needed to deal with proofs about synchronization by message passing. CSP's new convention for distributed termination of loops is dealt with. Applications of the method involve correctness proofs for two algorithms, one for distributed partitioning of sets, the other for distributed computation of the greatest common divisor of n numbers.
Script: a communication abstraction mechanism and its verification In this paper, we introduce a new abstraction mechanism, called a script, which hides the low-level details that implement patterns of communication. A script localizes the communication between a set of roles (formal processes), to which actual processes enroll to participate in the action of the script. The paper discusses the addition of scripts to the languages CSP and ADA, and to a shared-variable language with monitors. Proof rules are presented for proving partial correctness and freedom from deadlock in concurrent programs using scripts.
Programming Concepts, Methods and Calculi, Proceedings of the IFIP TC2/WG2.1/WG2.2/WG2.3 Working Conference on Programming Concepts, Methods and Calculi (PROCOMET '94) San Miniato, Italy, 6-10 June, 1994
Object-oriented specification of reactive systems A novel approach to the operational specification of concurrent systems that leads to an object-oriented specification language is presented. In contrast to object-oriented programming languages, objects are structured as hierarchical state-transition systems, methods of individual objects are replaced by roles in cooperative multiobject actions whereby explicit mechanisms for process communication are avoided, and a simple nondeterministic execution model that requires no explicit invocation of actions is introduced. The approach has a formal basis, and it emphasizes structured derivation of specifications. Top-down and bottom-up methodologies are reflected in two variants of inheritance. The former captures the methodology of designing distributed systems by superimposition; the latter is suited to the specification of reusable modules
Verification of Reactive Systems Using DisCo and PVS
The lattice of data refinement We define a very general notion of data refinement which comprises the traditional notion of data refinement as a special case. Using the concepts of duals and adjoints we define converse commands and find a symmetry between ordinary data refinement and a dual (backward) data refinement. We show how ordinary and backward data refinement are interpreted as simulation and we derive rules for the piecewise data refinement of programs. Our results are valid for a general language, covering...
A validation system for object oriented specifications of information systems In this paper, we present a set of software tools for developing and validating object oriented conceptual models specified in TROLL. TROLL is a formal object-oriented language for modelling information systems on a high level of abstraction. The tools include editors, syntax and consistency checkers as well as an animator which generates executable prototypes from the models on the same level of abstraction. In this way, the model behaviour can be observed and checked against the informal user requirements. After a short introduction to some validation techniques and research questions, we describe briefly the TROLL language as well as its graphical version OMTROLL. We then explain the system architecture and show its functionalities by a simplified example of an industrial application which is called CATC (Computer-Aided Testing and Certifying).
A constructive approach to the design of distributed systems The underlying model of distributed systems is that of loosely coupled components running in parallel and communicating by message passing. Description, construction and evolution of these systems is facilitated by separating the system structure, as a set of components and their interconnections, from the functional description of individual component behaviour. Furthermore, component reuse and structuring flexibility is enhanced if components are context independent, i.e. self-contained with a well defined interface for component interaction. The Conic environment for distributed programming supports this model. In particular, Conic provides a separate configuration language for the description, construction and evolution of distributed systems. The Conic environment has demonstrated a working environment which supports system distribution, reconfiguration and extension. We had initially supposed that Conic might pose difficult challenges for us as software designers. For example, what design techniques should we employ to develop a system that exploits the Conic facilities? In fact we have experienced quite the opposite. The principles of explicit system structure and context independent components that underlie Conic have led us naturally to a design approach which differs from that of both current industrial practice and current research. Our approach is termed "constructive" since it emphasises the satisfaction of system requirements by composition of components. In this paper we describe the approach and illustrate its use by application to an example, a model airport shuttle system which has been implemented in Conic.
Goal-Oriented Requirements Engineering: A Guided Tour Abstract: Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention over the past few years. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed. The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience with such approaches and tool support are briefly discussed as well.
Trade-Off Analysis For Requirements Selection Evaluation, prioritization and selection of candidate requirements are of tremendous importance and impact for subsequent software development. Effort, time as well as quality constraints have to be taken into account. Typically, different stakeholders have conflicting priorities and the requirements of all these stakeholders have to be balanced in an appropriate way to ensure maximum value of the final set of requirements. Trade-off analysis is needed to proactively explore the impact of certain decisions in terms of all the criteria and constraints. The proposed method, called Quantitative WinWin, uses an evolutionary approach to provide support for requirements negotiations. The novelty of the presented idea is four-fold. Firstly, it iteratively uses the Analytical Hierarchy Process (AHP) for a step-wise analysis with the aim of balancing the stakeholders' preferences related to different classes of requirements. Secondly, requirements selection is based on predicting and rebalancing its impact on effort, time and quality. Both prediction and rebalancing use the simulation model prototype GENSIM. Thirdly, alternative solution sets offered for decision-making are developed incrementally based on thresholds for the degree of importance of requirements and heuristics to find a best fit to constraints. Finally, trade-off analysis is used to determine non-dominated extensions of the maximum value that is achievable under resource and quality constraints. As a main result, Quantitative WinWin proposes a small number of possible sets of requirements from which the actual decision-maker can finally select the most appropriate solution.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.074841
0.068019
0.068019
0.015583
0.003844
0.000264
0.000068
0.000025
0.000006
0.000001
0
0
0
0
Coordinating Multi-Agents using JavaSpaces In recent years, multiagent systems have become a new attractive paradigm for developing Internet-based enterprise applications. In this paper, we investigate the agent coordination issue of multiagent systems. We explore the emerging JavaSpaces technology for achieving coordination within a multi-level supply chain management environment. JavaSpaces is a recent realization of the classic Linda model. The tuple space of the Linda model provides a convenient way for agent communication and coordination. The coordination protocol for the supply chain management system is presented. The protocol is designed based on JavaSpaces, and is described using Colored Petri Nets. We argue that the emerging JavaSpaces technology provides a convenient, yet flexible approach to agent coordination in multiagent system environments.
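To make the Linda-style coordination concrete, here is a minimal in-process sketch of a tuple space in Python. The class and its operations `out`, `rd` and `in_` are hypothetical stand-ins for JavaSpaces' write/read/take; this is an illustration of the model, not the JavaSpaces API.

```python
import threading

class TupleSpace:
    """Minimal in-process sketch of a Linda-style tuple space.
    JavaSpaces' write/read/take roughly map onto out/rd/in_ here."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                      # non-blocking write
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None fields act as wildcards; other fields must match exactly.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                return tup
        return None

    def rd(self, pattern):                   # blocking read (tuple stays)
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            return tup

    def in_(self, pattern):                  # blocking take (tuple removed)
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

# Example: a supplier agent posts an offer, a buyer agent takes it.
space = TupleSpace()
space.out(("offer", "widget", 100))
print(space.in_(("offer", "widget", None)))  # ('offer', 'widget', 100)
```

The decoupling shown here is the point of the model: the two agents never address each other directly, only the shared space.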
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
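The fetch-and-add primitive mentioned above atomically adds to a shared cell and returns the value it held before, which lets many processors claim unique indices without contending on the same lock. A minimal software emulation in Python (the Ultracomputer implements combining fetch-and-add in the switching network itself, which this sketch does not model):

```python
import threading

class FetchAndAddCell:
    """Software emulation of fetch-and-add: atomically add `delta`
    to the cell and return the value it held before the addition."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old

# Classic use: handing out unique work indices to parallel workers.
counter = FetchAndAddCell()
results = []

def worker():
    for _ in range(1000):
        results.append(counter.fetch_and_add(1))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert sorted(results) == list(range(4000))  # every index issued exactly once
```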
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
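As a rough illustration of the first-level TS mechanics described above, the sketch below runs a bit-flip tabu search on a small multiconstraint knapsack instance. It assumes a fixed tabu tenure and a best-improvement move rule with a simple aspiration criterion; the advanced strategies, learning and target analysis from the paper are not modeled, and all names and data are made up for the example.

```python
import random

def tabu_knapsack(values, weights, capacities, iters=1000, tenure=7):
    """Toy tabu search: maximize sum(values[i]*x[i]) subject to
    sum(weights[k][i]*x[i]) <= capacities[k] for each k, x binary."""
    n = len(values)
    x = [0] * n

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(v * s for v, s in zip(values, sol))

    best, best_val = x[:], value(x)
    tabu = {}  # variable index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1                      # single bit-flip neighbourhood
            if not feasible(y):
                continue
            v = value(y)
            # Aspiration: a tabu move is allowed if it beats the best so far.
            if tabu.get(i, -1) < it or v > best_val:
                candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = max(candidates)          # best-improvement move rule
        x = y
        tabu[i] = it + tenure              # forbid re-flipping i for a while
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

values = [10, 13, 7, 8, 9]
weights = [[2, 3, 1, 4, 2], [3, 1, 2, 2, 4]]
print(tabu_knapsack(values, weights, capacities=[7, 8]))
```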
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Polynomials-based integral inequality for stability analysis of linear systems with time-varying delays. This paper employs polynomial functions to extend a free-matrix-based integral inequality into a general integral inequality, say a polynomials-based integral inequality, which also contains well-known integral inequalities as special cases. By specially designing slack matrices and an arbitrary vector containing state terms, it reduces to an extended version of Wirtinger-based integral inequality or the free-matrix-based integral inequality. Numerical examples for stability analysis of linear systems with interval time-varying delays show the improved performance of the proposed integral inequality in terms of maximum delay bounds and numbers of variables.
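For reference, the Wirtinger-based integral inequality that such polynomial constructions extend can be stated as follows; this is the standard formulation from the literature, restated here for context rather than quoted from the abstract above.

```latex
% Wirtinger-based integral inequality, for R > 0 and differentiable x:
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
  \;\ge\; \frac{1}{b-a}\, \Omega_0^{\top} R\, \Omega_0
        + \frac{3}{b-a}\, \Omega_1^{\top} R\, \Omega_1,
\quad
\Omega_0 = x(b) - x(a), \quad
\Omega_1 = x(b) + x(a) - \frac{2}{b-a} \int_{a}^{b} x(s)\, \mathrm{d}s .
```

Higher-degree polynomial constructions add further projection terms of the same shape, which is where the specially designed slack matrices mentioned above come in.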
Orthogonal-polynomials-based integral inequality and its applications to systems with additive time-varying delays. Recently, a polynomials-based integral inequality was proposed by extending the Moon’s inequality into a generic formulation. By imposing certain structures on the slack matrices of this integral inequality, this paper proposes an orthogonal-polynomials-based integral inequality which has lower computational burden than the polynomials-based integral inequality while maintaining the same conservatism. Further, this paper provides notes on relations among recent general integral inequalities constructed with arbitrary degree polynomials. In these notes, it is shown that the proposed integral inequality is superior to the Bessel–Legendre (B–L) inequality and the polynomials-based integral inequality in terms of the conservatism and computational burden, respectively. Moreover, the effectiveness of the proposed method is demonstrated by an illustrative example of stability analysis for systems with additive time-varying delays.
Stochastic stability for distributed delay neural networks via augmented Lyapunov-Krasovskii functionals. This paper is concerned with the analysis problem for the globally asymptotic stability of a class of stochastic neural networks with finite or infinite distributed delays. By using the delay decomposition idea, a novel augmented Lyapunov–Krasovskii functional containing double and triple integral terms is constructed, based on which and in combination with the Jensen integral inequalities, a less conservative stability condition is established for stochastic neural networks with infinite distributed delay by means of linear matrix inequalities. As for stochastic neural networks with finite distributed delay, the Wirtinger-based integral inequality is further introduced, together with the augmented Lyapunov–Krasovskii functional, to obtain a more effective stability condition. Finally, several numerical examples demonstrate that our proposed conditions improve typical existing ones.
A generalized free-matrix-based integral inequality for stability analysis of time-varying delay systems. This paper focuses on the delay-dependent stability problem of time-varying delay systems. A generalized free-matrix-based integral inequality (GFMBII) is presented. This inequality is able to deal with time-varying delay systems without using the reciprocal convexity lemma. It overcomes the drawback that the Bessel–Legendre inequality is inconvenient to cope with a time-varying delay system as the resultant bound contains a reciprocal convexity. Through the use of the derived inequality and by constructing a suitable Lyapunov–Krasovskii function (LKF), improved stability criteria are presented in the form of linear matrix inequalities (LMIs). Two numerical examples are carried out to demonstrate that the results outperform the state of the art in the literature.
A Note on Relationship Between Two Classes of Integral Inequalities. This technical note firstly introduces two classes of integral inequalities with and without free matrices, respectively, and points out that they, although in different forms, are actually equivalent in the sense of conservatism, i.e., the two corresponding ones produce the same tight upper bounds. Secondly, the relationship between the method of integral inequalities with free matrices and the free-weighting matrix technique is intensively investigated. It is shown that these two different methods are actually equivalent in assessing the stability of time-delay systems.
Stability of Linear Systems With Time-Varying Delays Using Bessel–Legendre Inequalities This paper addresses the stability problem of linear systems with a time-varying delay. Hierarchical stability conditions based on linear matrix inequalities are obtained from an extensive use of the Bessel inequality applied to Legendre polynomials of arbitrary orders. While this inequality has been only used for constant discrete and distributed delays, this paper generalizes the same methodology to time-varying delays. We take advantage of the dependence of the stability criteria on both the delay and its derivative to propose a new definition of allowable delay sets. A light and smart modification in their definition leads to relevant conclusions on the numerical results.
An overview of recent developments in Lyapunov-Krasovskii functionals and stability criteria for recurrent neural networks with time-varying delays. Global asymptotic stability is an important issue for wide applications of recurrent neural networks with time-varying delays. The Lyapunov–Krasovskii functional method is a powerful tool to check the global asymptotic stability of a delayed recurrent neural network. When the Lyapunov–Krasovskii functional method is employed, three steps are necessary in order to derive a global asymptotic stability criterion: (i) constructing a Lyapunov–Krasovskii functional, (ii) estimating the derivative of the Lyapunov–Krasovskii functional, and (iii) formulating a global asymptotic stability criterion. This paper provides an overview of recent developments in each step with insightful understanding. In the first step, some existing Lyapunov–Krasovskii functionals for stability of delayed recurrent neural networks are anatomized. In the second step, a free-weighting matrix approach, an integral inequality approach and its recent developments, reciprocally convex inequalities and S-procedure are analyzed in detail. In the third step, linear convex and quadratic convex approaches, together with the refinement of allowable delay sets are reviewed. Finally, some challenging issues are presented to guide the future research.
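As a concrete instance of step (i), a basic augmented functional for a system with constant delay h has the textbook form below; this is generic background, not a functional taken from the surveyed papers.

```latex
% A basic Lyapunov-Krasovskii functional for dx/dt = A x(t) + A_d x(t-h):
V(x_t) = x^{\top}(t)\, P\, x(t)
       + \int_{t-h}^{t} x^{\top}(s)\, Q\, x(s)\, \mathrm{d}s
       + h \int_{-h}^{0} \int_{t+\theta}^{t}
             \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta,
\qquad P, Q, R \succ 0 .
```

Step (ii) then bounds the integral terms appearing in the derivative of V (e.g. by Jensen- or Wirtinger-type inequalities), and step (iii) collects the result into an LMI feasibility test.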
Stability and Stabilization of T-S Fuzzy Systems With Time-Varying Delays via Delay-Product-Type Functional Method. This paper is concerned with the stability and stabilization problems of T-S fuzzy systems with time-varying delays. The purpose is to develop a new state-feedback controller design method with less conservatism. First, a novel Lyapunov-Krasovskii functional is constructed by combining delay-product-type functional method together with the state vector augmentation. By utilizing Wirtinger-based in...
Robust sampled-data stabilization of linear systems: an input delay approach A new approach to robust sampled-data control is introduced. The system is modelled as a continuous-time one, where the control input has a piecewise-continuous delay. Sufficient linear matrix inequality (LMI) conditions for sampled-data state-feedback stabilization of such systems are derived via a descriptor approach to time-delay systems. The only restriction on the sampling is that the distance between the sequel sampling times is not greater than some prechosen h>0 for which the LMIs are feasible. For h→0 the conditions coincide with the necessary and sufficient conditions for continuous-time state-feedback stabilization. Our approach is applied to two problems: to sampled-data stabilization of systems with polytopic type uncertainties and to regional stabilization by sampled-data saturated state-feedback.
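The key modeling step of the input delay approach can be written out explicitly: between sampling instants, the held control signal is a delayed version of a continuous-time one. Restated in standard form:

```latex
% Zero-order-hold control recast as a piecewise-continuous delayed input:
u(t) = u_d(t_k) = u_d\bigl(t - \tau(t)\bigr),
\qquad \tau(t) = t - t_k \in [0, h),
\quad t_k \le t < t_{k+1},
```

so the sawtooth delay grows linearly within each sampling interval and is bounded by the prechosen h, and any LMI condition feasible for that delay bound covers all admissible sampling sequences.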
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Motivation: When running experiments that involve multiple high density oligonucleotide arrays, it is important to remove sources of variation between arrays of non-biological origin. Normalization is a process for reducing this variation. It is common to see non-linear relations between arrays and the standard normalization provided by Affymetrix does not perform well in these situations. Results: We present three methods of performing normalization at the probe intensity level. These methods are called complete data methods because they make use of data from all arrays in an experiment to form the normalizing relation. These algorithms are compared to two methods that make use of a baseline array: a one number scaling based algorithm and a method that uses a non-linear normalizing relation by comparing the variability and bias of an expression measure. Two publicly available datasets are used to carry out the comparisons. The simplest and quickest complete data method is found to perform favorably. Availability: Software implementing all three of the complete data normalization methods is available as part of the R package Affy, which is a part of the Bioconductor project http://www.bioconductor.org. Contact: [email protected] Supplementary information: Additional figures may be found at http://www.stat.berkeley.edu/~bolstad/normalize/index.html.
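A widely used complete data method in this family is quantile normalization, which forces every array to share one reference distribution. The numpy sketch below illustrates the idea only; it is not the Bioconductor affy implementation, and it handles ties naively.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a (probes x arrays) matrix: each column is
    mapped onto the distribution of row-wise means of the sorted columns."""
    order = np.argsort(X, axis=0)                     # per-array sort order
    ranks = np.argsort(order, axis=0)                 # rank of each probe per array
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)  # reference distribution
    return mean_quantiles[ranks]                      # map ranks back to means

X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
print(quantile_normalize(X))  # every column now has identical quantiles
```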
Network Topology and a Case Study in TCOZ Object-Z is strong in modeling the data and operations of complex systems. However, it is weak in specifying real-time and concurrent systems. The Timed Communicating Object-Z (TCOZ) extends the Object-Z notation with Timed CSP's constructs. TCOZ is particularly well suited for specifying complex systems whose components have their own thread of control. This paper demonstrates the expressiveness of the TCOZ notation through a case study on specifying a multi-lift system that operates in real-time.
Delirium: an embedding coordination language Parallel programs consist of a group of sequentially executing sub-computations which cooperate to solve a problem. To exploit existing sequential code and available optimization tools, programmers usually choose to write these sub-computations in traditional imperative languages such as C and Fortran. A coordination language expresses data exchange and synchronization among such sub-computations. Current coordination languages support a variety of interaction models. Delirium introduces a new, more restrictive coordination model that provides the benefit of deterministic execution without requiring programmers to re-write large amounts of code. Current coordination languages are embedded; they work through the insertion of coordination primitives within a host language. Delirium is the first example of an embedding coordination language. A Delirium program is a compact representation of a framework for accomplishing a task in parallel.
Evolutionary Fuzzy Relational Modeling for Fuzzy Time Series Forecasting The use of fuzzy time series has attracted considerable attention in studies that aim to make forecasts using uncertain information. However, most of the related studies do not use a learning mechanism to extract valuable information from historical data. In this study, we propose an evolutionary fuzzy forecasting model, in which a learning technique for a fuzzy relation matrix is designed to fit the historical data. Taking into consideration the causal relationships among the linguistic terms that are missing in many existing fuzzy time series forecasting models, this method can naturally smooth the defuzzification process, thus obtaining better results than many other fuzzy time series forecasting models, which tend to produce stepwise outcomes. The experimental results with two real datasets and four indicators show that the proposed model achieves a significant improvement in forecasting accuracy compared to earlier models.
1.053333
0.05
0.05
0.02
0.012708
0.005764
0.0025
0.000476
0.000002
0
0
0
0
0
The programmer as navigator: a discourse on program structure This paper takes a new view of the familiar title. It is argued that a computer program may be viewed as a map, and the programmer as a cargo-laden vessel navigating the routes. Two programming case studies are presented to show that, even where the map itself is well structured, hazardous journeys may result from the overloading of the vessel with directional cargo.
Further comments on the premature loop exit problem
A bidirectional data driven Lisp engine for the direct execution of Lisp in parallel
Qlisp: Parallel Processing in Lisp One of the major problems in converting serial programs to take advantage of parallel processing has been the lack of a multiprocessing language that is both powerful and understandable to programmers. The authors describe multiprocessing extensions to Common Lisp designed to be suitable for studying styles of parallel programming at the medium-grain level in a shared-memory architecture. The resulting language is called Qlisp. Two features for addressing synchronization problems are included in Qlisp. The first is the concept of heavyweight futures, and the second is a novel type of function called a partially multiply invoked function. An initial implementation of Qlisp has been carried out, and various experiments performed. Results to date indicate that its performance is about as good as expected.
History of LISP This paper concentrates on the development of the basic ideas and distinguishes two periods - Summer 1956 through Summer 1958, when most of the key ideas were developed (some of which were implemented in the FORTRAN-based FLPL), and Fall 1958 through 1962, when the programming language was implemented and applied to problems of artificial intelligence. After 1962, the development of LISP became multi-stranded, and different ideas were pursued in different places.
Exception Handling in Multilisp
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
The wire-tap channel We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc. Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.
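The trade-off result is usually summarized by the secrecy capacity of the degraded wiretap channel X → Y → Z; the standard statement, restated here for context, is:

```latex
% Secrecy capacity of the degraded wiretap channel X -> Y -> Z:
C_s = \max_{p(x)} \bigl[ I(X;Y) - I(X;Z) \bigr],
```

so reliable transmission in (approximately) perfect secrecy is possible at any rate up to C_s and impossible above it.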
Supporting scenario-based requirements engineering Scenarios have been advocated as a means of improving requirements engineering yet few methods or tools exist to support scenario-based RE. The paper reports a method and software assistant tool for scenario-based RE that integrates with use case approaches to object-oriented development. The method and operation of the tool are illustrated with a financial system case study. Scenarios are used to represent paths of possible behavior through a use case, and these are investigated to elaborate requirements. The method commences by acquisition and modeling of a use case. The use case is then compared with a library of abstract models that represent different application classes. Each model is associated with a set of generic requirements for its class, hence, by identifying the class(es) to which the use case belongs, generic requirements can be reused. Scenario paths are automatically generated from use cases, then exception types are applied to normal event sequences to suggest possible abnormal events resulting from human error. Generic requirements are also attached to exceptions to suggest possible ways of dealing with human error and other types of system failure. Scenarios are validated by rule-based frames which detect problematic event patterns. The tool suggests appropriate generic requirements to deal with the problems encountered. The paper concludes with a review of related work and a discussion of the prospects for scenario-based RE methods and tools.
Design and analysis of high-throughput lossless image compression engine using VLSI-oriented FELICS algorithm In this paper, the VLSI-oriented fast, efficient, lossless image compression system (FELICS) algorithm, which consists of simplified adjusted binary code and Golomb-Rice code with storage-less k parameter selection, is proposed to provide the lossless compression method for high-throughput applications. The simplified adjusted binary code reduces the number of arithmetic operations and improves processing speed. According to theoretical analysis, the storage-less k parameter selection applies a fixed k value in the Golomb-Rice code to remove data dependency and extra storage for the cumulation table. Besides, the color difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. Based on the VLSI-oriented FELICS algorithm, the proposed hardware architecture features compactly regular data flow, and two-level parallelism with four-stage pipelining is adopted as the framework of the proposed architecture. The chip is fabricated in TSMC 0.13-µm 1P8M CMOS technology with the Artisan cell library. Experimental results reveal that the proposed architecture presents superior performance in parallelism-efficiency and power-efficiency compared with other existing works, which characterize high-speed lossless compression. The maximum throughput can achieve 4.36 Gb/s. Regarding high definition (HD) display applications, our encoding capability can achieve a high-quality specification of full-HD 1080p at 60 Hz with complete red, green, blue color components. Furthermore, with the configuration as the multilevel parallelism, the proposed architecture can be applied to the advanced HD display specifications, which demand a huge throughput requirement.
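To illustrate the Golomb-Rice stage that the storage-less k selection feeds, here is a minimal software sketch of Rice coding with a fixed k: a unary quotient followed by a k-bit binary remainder. It illustrates the code family only; the simplified adjusted binary code and the hardware pipeline from the paper are not modeled.

```python
def rice_encode(n, k):
    """Golomb-Rice code of a nonnegative integer n with parameter k:
    q ones and a terminating zero (unary quotient), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":                 # count the unary prefix
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# Round-trip check with k = 2: small residuals get short codewords.
for n in range(8):
    code = rice_encode(n, k=2)
    assert rice_decode(code, k=2) == n
    print(n, code)
```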
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
Incremental planning using conceptual graphs
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.127136
0.149647
0.149647
0.099765
0.039658
0.003119
0.000131
0
0
0
0
0
0
0
Optimizing Agile Processes by Early Identification of Hidden Requirements In recent years, Agile methodologies have increased their relevance in software development, through the application of different testing techniques like unit or acceptance testing. Tests play a similar role in agile methodologies as in waterfall process models: checking conformance. Nevertheless, the scenario is not the same. The contribution of this paper is to explain how the process can be modified to do early identification of hidden requirements (HR) using testing techniques in agile methodologies, specifically using failed tests. The result is an optimized agile process where it may be possible to reach the desired level of functionality in fewer iterations, but with a similar level of quality. Furthermore, it might be necessary to re-think the role of process elements, e.g. tests, in the Agile context, without assuming their waterfall definition and scope.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Improved results on stability and stabilization criteria for uncertain linear systems with time-varying delays In this paper, the problems of stability and stabilization for linear systems with time-varying delays and norm-bounded parameter uncertainties are considered. By constructing augmented Lyapunov functionals and utilizing auxiliary function-based integral inequalities, improved delay-dependent stability and stabilization criteria for guaranteeing the asymptotic stability of the system are proposed within the framework of linear matrix inequalities. Four numerical examples are included to show that the proposed results can reduce the conservatism of stability and stabilization criteria by comparing maximum delay bounds.
Stability analysis of linear systems with interval time-varying delays utilizing multiple integral inequalities. This paper is devoted to stability analysis of continuous-time delay systems with interval time-varying delays having known bounds on the delay derivatives. A parameterized family of Lyapunov–Krasovskii functionals involving multiple integral terms is introduced, and novel multiple integral inequalities are utilized to derive a sufficient stability condition for systems with time-varying delays. The efficiency of the proposed method is illustrated by numerical examples.
Complete quadratic Lyapunov functionals for distributed delay systems This paper is concerned with the stability analysis of distributed delay systems using complete Lyapunov functionals. Numerous articles aim at approximating their parameters thanks to a discretization method or polynomial modeling. The interest of such approximations is the design of tractable sufficient stability conditions. In the present article, we provide an alternative method based on polynomial approximation which takes advantage of the Legendre polynomials and their properties. The resulting stability conditions are scalable with respect to the maximum degree of the polynomials and are expressed in terms of tractable linear matrix inequalities. Several examples of delayed systems are tested to show the effectiveness of the method.
Exponential stability of time-delay systems via new weighted integral inequalities. In this paper, new weighted integral inequalities (WIIs) are first derived based on Jensen's integral inequalities in single and double forms. It is theoretically shown that the newly derived inequalities in this paper encompass both the Jensen inequality and its most recent improvement based on Wirtinger's integral inequality. The potential capability of WIIs is demonstrated through applications to exponential stability analysis of some classes of time-delay systems in the framework of linear matrix inequalities (LMIs). The effectiveness and least conservativeness of the derived stability conditions using WIIs are shown by various numerical examples.
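The baseline that these weighted inequalities sharpen is Jensen's integral inequality, whose single-integral form reads (a standard statement, restated here for context):

```latex
% Jensen's integral inequality, for R > 0:
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
  \;\ge\; \frac{1}{b-a}
  \bigl( x(b) - x(a) \bigr)^{\top} R\, \bigl( x(b) - x(a) \bigr).
```

Weighted and Wirtinger-type refinements tighten this bound by adding correction terms that vanish only for special trajectories, which is what reduces conservatism in the resulting LMIs.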
On stability criteria for neural networks with time-varying delay using Wirtinger-based multiple integral inequality This paper investigates the problem of delay-dependent stability analysis of neural networks with time-varying delay. Based on the Wirtinger-based integral inequality, which provides a very close lower bound of Jensen's inequality, a new Wirtinger-based multiple integral inequality is presented and applied to time-varying delayed neural networks by using the reciprocally convex combination approach for high order cases. Three numerical examples are given to demonstrate the reduced conservatism of the proposed methods.
Stabilization of Delay Systems: Delay-dependent Impulsive Control The stabilization problem of delay systems is studied under delay-dependent impulsive control. The main contributions of this paper are that, for one thing, it shows that time delays in the impulse term may contribute to the stabilization of delay systems, that is, a control strategy which does not work without delay feedback in the impulse term can be activated to stabilize some unstable delay systems if there exist some time delay feedbacks; for another, it shows the robustness of impulsive control, that is, the designed control strategy admits the existence of some time delays in the impulse term which may do harm to the stabilization. In this paper, from an impulsive control point of view we firstly propose an impulsive delay inequality. Then we apply it to delay systems which may be originally unstable, and derive some delay-dependent impulsive control criteria to ensure the stabilization of the addressed systems. The effectiveness of the proposed strategy is evidenced by two illustrative examples.
Fuzzy-model-based admissibility analysis and output feedback control for nonlinear discrete-time systems with time-varying delay. This paper is concerned with the admissibility analysis and stabilization problems for singular fuzzy discrete-time systems with time-varying delay. The novelty of this paper comes from the consideration of a new summation inequality which is less conservative than the usual Jensen inequality, the Abel-Lemma based inequality and the Seuret inequality. Based on the inequality, sufficient conditions are established to ensure the systems to be admissible. Moreover, the corresponding conditions for the existence of desired static output feedback controller gains are derived to guarantee that the closed-loop system is admissible. The conditions can be solved by a modified cone complementarity linearization (CCL) algorithm. Examples are given to show the effectiveness of the proposed method.
A neutral system approach to stability of singular time-delay systems This paper is concerned with the problem of delay-dependent stability for a class of singular time-delay systems. By representing the singular system as a neutral form, using an augmented Lyapunov–Krasovskii functional and the Wirtinger-based integral inequality method, we obtain a new stability criterion in terms of a linear matrix inequality (LMI). The criterion is applicable for the stability test of both singular time-delay systems and neutral systems with constant time delays. Illustrative examples show the effectiveness and merits of the method.
Asynchronous Output-Feedback Control of Networked Nonlinear Systems With Multiple Packet Dropouts: T–S Fuzzy Affine Model-Based Approach This paper investigates the problem of robust output-feedback control for a class of networked nonlinear systems with multiple packet dropouts. The nonlinear plant is represented by Takagi-Sugeno (T-S) fuzzy affine dynamic models with norm-bounded uncertainties, and stochastic variables that satisfy the Bernoulli random binary distribution are adopted to characterize the data-missing phenomenon. The objective is to design an admissible output-feedback controller that guarantees the stochastic stability of the resulting closed-loop system with a prescribed disturbance attenuation level. It is assumed that the plant premise variables, which are often the state variables or their functions, are not measurable so that the controller implementation with state-space partition may not be synchronous with the state trajectories of the plant. Based on a piecewise quadratic Lyapunov function combined with an S-procedure and some matrix inequality convexifying techniques, two different approaches to robust output-feedback controller design are developed for the underlying T-S fuzzy affine systems with unreliable communication links. The solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
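The delay mechanism described above is commonly realized with memoized thunks (call-by-need): a suspended computation is forced at most once and its value cached. A minimal Python sketch of the idea, not the paper's pure-LISP algorithm:

```python
class Thunk:
    """A suspended computation, forced at most once (call-by-need)."""
    def __init__(self, fn):
        self._fn, self._done, self._value = fn, False, None

    def force(self):
        if not self._done:
            self._value, self._done = self._fn(), True
            self._fn = None   # drop the closure so it can be collected
        return self._value

def cons(head, tail_thunk):
    return (head, tail_thunk)

def integers_from(n):
    # An infinite lazy list: the tail is only built when forced.
    return cons(n, Thunk(lambda: integers_from(n + 1)))

xs = integers_from(0)
for _ in range(5):
    head, tail = xs
    print(head, end=" ")      # prints: 0 1 2 3 4
    xs = tail.force()
```

Because each tail is built only on demand, no more evaluation steps are performed than an eager traversal of the same prefix would need, which mirrors the paper's claim.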
UniProt Knowledgebase: a hub of integrated protein data. The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases.
A Compositional Real-Time Semantics of STATEMATE Designs
Verification of Reactive Systems Using DisCo and PVS
Consensus with guaranteed convergence rate of high-order integrator agents in the presence of time-varying delays This paper aims to study the consensus problem in directed networks of agents with high-order integrator dynamics and fixed topology. We consider the existence of non-uniform time-varying delays in the agents' control laws for each interaction between agents and their neighbours. Based on Lyapunov–Krasovskii stability theory and algebraic graph theory, sufficient conditions, in terms of linear matrix inequalities, are given to verify if consensus is achieved with a guaranteed exponential convergence rate. The efficiency of the proposed method is verified by numerical simulations. The simulations reveal that the conditions established in this work outperformed similar existing conditions in all the numerical tests carried out in this paper.
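For orientation, the first-order special case of such delayed consensus protocols has the standard form below (the paper treats the high-order integrator generalization):

```latex
% First-order consensus under non-uniform time-varying delays tau_ij(t):
\dot{x}_i(t) = \sum_{j \in \mathcal{N}_i} a_{ij}
  \bigl( x_j(t - \tau_{ij}(t)) - x_i(t - \tau_{ij}(t)) \bigr),
```

with a_{ij} > 0 the weight of the edge from agent j to agent i; consensus means all pairwise differences x_i - x_j converge to zero, here with a guaranteed exponential rate.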
1.211181
0.042247
0.026968
0.021505
0.011628
0.00221
0.000333
0.000125
0.000012
0
0
0
0
0
Improved Results on Guaranteed Generalized $\mathcal{H}_2$ Performance State Estimation for Delayed Static Neural Networks. This paper is concerned with the guaranteed generalized $\mathcal{H}_2$ performance state estimation for a class of static neural networks with a time-varying delay. A more general Arcak-type state estimator rather than the Luenberger-type state estimator is adopted to deal with this problem. Based on the Lyapunov stability theory, the inequality techniques and the delay-partitioning approach, some novel delay-dependent design criteria in terms of linear matrix inequalities (LMIs) are proposed ensuring that the resulting error system is globally asymptotically stable and a prescribed generalized $\mathcal{H}_2$ performance is guaranteed. The estimator gain matrices can be derived by solving the LMIs. Compared with the existing results, the sufficient conditions presented in this paper are less conservative. Numerical examples are given to illustrate the effectiveness and superiority of the developed method over the existing approaches. A comparison between the Arcak-type state estimator and the Luenberger-type state estimator is given simultaneously.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL* (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
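A minimal sketch of the enabled-set idea, under simplifying assumptions: a sequential object recomputes its enabled set of messages from local state after every step and defers messages that are not currently enabled (the Rosette mechanism is additionally first-class, composable, and parameterizable on message content, none of which is modeled here):

```python
from collections import deque

class BoundedBuffer:
    """Messages outside the current enabled set are deferred, not rejected."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.pending = []          # deferred messages

    def enabled_set(self):
        # Recomputed from local state alone -- this is the 'local reasoning'.
        enabled = set()
        if len(self.items) < self.capacity:
            enabled.add('put')
        if self.items:
            enabled.add('get')
        return enabled

    def send(self, msg, arg=None):
        self.pending.append((msg, arg))
        self._dispatch()

    def _dispatch(self):
        progress = True
        while progress:
            progress = False
            for m in list(self.pending):
                if m[0] in self.enabled_set():
                    self.pending.remove(m)
                    self._handle(*m)
                    progress = True

    def _handle(self, msg, arg):
        if msg == 'put':
            self.items.append(arg)
        elif msg == 'get':
            print('got', self.items.popleft())

buf = BoundedBuffer(capacity=1)
buf.send('get')        # deferred: 'get' is not enabled on an empty buffer
buf.send('put', 42)    # enables 'get', which then fires -> prints "got 42"
```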
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
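One standard way to state the central condition (a reconstruction consistent with the description above, not a quotation from the paper): with programs taken as weakest-precondition predicate transformers and \(\alpha\) the predicate transformer mapping abstract predicates to concrete ones, an abstract program \(S\) is data-refined by a concrete program \(T\) through \(\alpha\) when

\[
  S \sqsubseteq_{\alpha} T
  \quad\text{iff}\quad
  \forall q.\;\; \alpha\,(\mathsf{wp}(S, q)) \;\Rightarrow\; \mathsf{wp}(T, \alpha(q)),
\]

so the abstraction function of classical data refinement is generalised to a predicate transformer, and refinement proofs reduce to pointwise implications between predicates.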
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed for general 0/1 MIP problems and thus contains no problem-domain-specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
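A compact sketch of a recency-based tabu search with an aspiration-by-objective rule on a multiconstraint knapsack instance; the data, tenure, and iteration budget are invented for illustration, and none of the paper's advanced-level strategies, probabilistic measures, or Target Analysis are modeled:

```python
import random

# Multiconstraint knapsack: maximize p.x subject to W x <= c, x in {0,1}^n
p = [10, 13, 7, 8, 15, 9]                 # profits (illustrative)
W = [[2, 3, 1, 4, 5, 2],                  # two resource constraints
     [4, 1, 3, 2, 2, 5]]
c = [9, 10]
n = len(p)

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(n)) <= cj for w, cj in zip(W, c))

def value(x):
    return sum(p[i] * x[i] for i in range(n))

random.seed(0)
x = [0] * n                                # all-zero start is always feasible
best_x, best_val = x[:], 0
tabu = {}                                  # flipped variable -> iteration until tabu
TENURE, ITERS = 4, 200

for it in range(ITERS):
    candidates = []
    for i in range(n):                     # neighborhood: single-bit flips
        y = x[:]; y[i] ^= 1
        if not feasible(y):
            continue
        v = value(y)
        if tabu.get(i, -1) >= it and v <= best_val:
            continue                       # tabu move, and aspiration not met
        candidates.append((v, i, y))
    if not candidates:
        continue
    v, i, x = max(candidates)              # best admissible move (may worsen)
    tabu[i] = it + TENURE
    if v > best_val:
        best_val, best_x = v, x[:]

print(best_val, best_x)
```

Note the two ingredients the abstract emphasizes: moves are chosen even when they worsen the objective (to escape local optima), and a tabu move is overridden only when it beats the best value found so far.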
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), interaction, and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0