Dataset schema (string columns report character-length ranges; score columns report value ranges):

Column      Type     Min  Max
Query Text  string   9    8.71k
Ranking 1   string   14   5.31k
Ranking 2   string   11   5.31k
Ranking 3   string   11   8.42k
Ranking 4   string   17   8.71k
Ranking 5   string   14   4.95k
Ranking 6   string   14   8.42k
Ranking 7   string   17   8.42k
Ranking 8   string   10   5.31k
Ranking 9   string   9    8.42k
Ranking 10  string   9    8.42k
Ranking 11  string   10   4.11k
Ranking 12  string   14   8.33k
Ranking 13  string   17   3.82k
score_0     float64  1    1.25
score_1     float64  0    0.25
score_2     float64  0    0.25
score_3     float64  0    0.24
score_4     float64  0    0.24
score_5     float64  0    0.24
score_6     float64  0    0.21
score_7     float64  0    0.1
score_8     float64  0    0.02
score_9     float64  0    0
score_10    float64  0    0
score_11    float64  0    0
score_12    float64  0    0
score_13    float64  0    0
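The schema above is enough to prototype code against rows of this shape. The sketch below is one possible way to load such rows and inspect an example; it is not taken from any documentation of this dataset. The file name rankings.jsonl and the pairing of score_i with Ranking i are assumptions made for illustration only.

```python
# Minimal sketch, assuming the preview rows are stored locally as JSON Lines.
# Both the file name "rankings.jsonl" and the pairing of score_i with
# "Ranking i" are assumptions for illustration, not facts from the preview.
import json

import pandas as pd

RANKING_COLS = [f"Ranking {i}" for i in range(1, 14)]  # Ranking 1 .. Ranking 13
SCORE_COLS = [f"score_{i}" for i in range(14)]         # score_0 .. score_13


def load_rows(path: str = "rankings.jsonl") -> pd.DataFrame:
    """Read one JSON object per line into a DataFrame with one query per row."""
    with open(path, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f]
    return pd.DataFrame(rows, columns=["Query Text", *RANKING_COLS, *SCORE_COLS])


def top_candidates(row: pd.Series, k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-scoring ranked texts of a single row.

    score_0 is skipped because the preview does not say which text it scores.
    """
    pairs = [(row[f"Ranking {i}"], float(row[f"score_{i}"])) for i in range(1, 14)]
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:k]


if __name__ == "__main__":
    df = load_rows()
    first = df.iloc[0]
    print("Query:", first["Query Text"][:80])
    for text, score in top_candidates(first):
        print(f"{score:10.6f}  {text[:60]}")
```

The example rows from the preview follow. Each row lists the query text, its thirteen ranked candidate documents (Ranking 1 through Ranking 13, in order), and its fourteen scores.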
Query: On the study of lossless compression of computer generated compound images This paper studies the problem of lossless compression of computer generated compound images that contain not only photographic images but also text and graphic images. We present a simple backward adaptive classification scheme to separate the image source into three classes: smooth regions, text regions and image regions. Different probability models are assigned within each class to maximize the compression performance. We also extend our scheme to exploit the interplane dependency for coding color images. The segmentation results of the reference color plane are used as the contexts for the classification and coding of the current color plane. Our new lossless coder significantly outperforms current state-of-the-art coders such as CALIC and JPEG-LS for compound images with modest computational complexity.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Query: Event-Triggered Generalized Dissipativity Filtering for Neural Networks With Time-Varying Delays This paper is concerned with event-triggered generalized dissipativity filtering for a neural network (NN) with a time-varying delay. The signal transmission from the NN to its filter is completed through a communication channel. It is assumed that the network measurement of the NN is sampled periodically. An event-triggered communication scheme is introduced to design a suitable filter such that precious communication resources can be saved significantly while certain filtering performance can be ensured. On the one hand, the event-triggered communication scheme is devised to select only those sampled signals violating a certain threshold to be transmitted, which directly leads to saving of precious communication resources. On the other hand, the filtering error system is modeled as a time-delay system closely dependent on the parameters of the event-triggered scheme. Based on this model, a suitable filter is designed such that certain filtering performance can be ensured, provided that a set of linear matrix inequalities are satisfied. Furthermore, since a generalized dissipativity performance index is introduced, several kinds of event-triggered filtering issues, such as H∞ filtering, passive filtering, mixed H∞ and passive filtering, (Q,S,R)-dissipative filtering, and L2-L∞ filtering, are solved in a unified framework. Finally, two examples are given to illustrate the effectiveness of the proposed method.
Observer-Based Event-Triggering Consensus Control for Multiagent Systems With Lossy Sensors and Cyber-Attacks. In this paper, the observer-based event-triggering consensus control problem is investigated for a class of discrete-time multiagent systems with lossy sensors and cyber-attacks. A novel distributed observer is proposed to estimate the relative full states and the estimated states are then used in the feedback protocol in order to achieve the overall consensus. An event-triggered mechanism with st...
Exponential stabilization of neural networks with time-varying delay by periodically intermittent control. This paper investigates the exponential stabilization of neural networks with time-varying delay by periodically intermittent control. By employing the free-matrix-based integral inequality and using some new analysis techniques, some novel exponential stabilization criteria are derived based on the Lyapunov-Krasovskii (L-K) functional method. The obtained criteria are in terms of linear matrix inequalities without transcendental equation, instead of nonlinear matrix inequalities, which reduces the computational burden. Compared to existing results in corresponding literatures, our results have a wider range of applications, and overcome no feasible solution if the information on the sizes of delays is ignored for the design of the intermittent controller. A numerical simulation is provided to show the effectiveness and the benefits of the theoretical results.
Two novel general summation inequalities to discrete-time systems with time-varying delay. This paper presents two novel general summation inequalities, respectively, in the upper and lower discrete regions. Thanks to the orthogonal polynomials defined in different inner spaces, various concrete single/multiple summation inequalities are obtained from the two general summation inequalities, which include almost all of the existing summation inequalities, e.g., the Jensen, the Wirtinger-based and the auxiliary function-based summation inequalities. Based on the new summation inequalities, a less conservative stability condition is derived for discrete-time systems with time-varying delay. Numerical examples are given to show the effectiveness of the proposed approach.
Auxiliary function-based summation inequalities and their applications to discrete-time systems. Auxiliary function-based summation inequalities are addressed in this technical note. By constructing appropriate auxiliary functions, several new summation inequalities are obtained. A novel sufficient criterion for asymptotic stability of discrete-time systems with time-varying delay is obtained in terms of linear matrix inequalities. The advantage of the proposed method is demonstrated by two classical examples from the literature.
Neuronal State Estimation for Neural Networks With Two Additive Time-Varying Delay Components. This paper is concerned with the state estimation for neural networks with two additive time-varying delay components. Three cases of these two time-varying delays are fully considered: 1) both delays are differentiable uniformly bounded with delay-derivative bounded by some constants; 2) one delay is continuous uniformly bounded while the other is differentiable uniformly bounded with delay-deriv...
Discrete Wirtinger-based inequality and its application In this paper, we derive a new inequality, which encompasses the discrete Jensen inequality. The new inequality is applied to analyze stability of linear discrete systems with an interval time-varying delay and a less conservative stability condition is obtained. Two numerical examples are given to show the effectiveness of the obtained stability condition.
Discrete inequalities based on multiple auxiliary functions and their applications to stability analysis of time-delay systems This paper presents new discrete inequalities for single summation and double summation. These inequalities are based on multiple auxiliary functions and include the Jensen discrete inequality and the discrete Wirtinger-based inequality as special cases. An application of these discrete inequalities to analyze stability of linear discrete systems with an interval time-varying delay is studied and a less conservative stability condition is obtained. Three numerical examples are given to show the effectiveness of the obtained stability condition.
New delay-dependent stability criteria for T--S fuzzy systems with time-varying delay This paper is concerned with the stability problem of uncertain T-S fuzzy systems with time-varying delay by employing a further improved free-weighting matrix approach. By taking the relationship among the time-varying delay, its upper bound and their difference into account, some less conservative LMI-based delay-dependent stability criteria are obtained without ignoring any useful terms in the derivative of Lyapunov-Krasovskii functional. Finally, two numerical examples are given to demonstrate the effectiveness and the merits of the proposed methods.
Script: a communication abstraction mechanism and its verification In this paper, we introduce a new abstraction mechanism, called a script, which hides the low-level details that implement patterns of communication. A script localizes the communication between a set of roles (formal processes), to which actual processes enroll to participate in the action of the script. The paper discusses the addition of scripts to the languages CSP and ADA, and to a shared-variable language with monitors. Proof rules are presented for proving partial correctness and freedom from deadlock in concurrent programs using scripts.
An extendable approach to computer-aided software requirements engineering The development of system requirements has been recognized as one of the major problems in the process of developing data processing system software. We have developed a computer-aided system for maintaining and analyzing such requirements. This system includes the Requirements Statement Language (RSL), a flow-oriented language for the expression of software requirements, and the Requirements Engineering and Validation System (REVS), a software package which includes a translator for RSL, a data base for maintaining the description of system requirements, and a collection of tools to analyze the information in the data base. The system emphasizes a balance between the use of the creativity of human thought processes and the rigor and thoroughness of computer analysis. To maintain this balance, two key design principles—extensibility and disciplined thinking—were followed throughout the system. Both the language and the software are easily user-extended, but adequate locks are placed on extensions, and limitations are imposed on use, so that discipline is augmented rather than decreased.
Intelligent Clearinghouse: Electronic Marketplace with Computer-mediated Negotiation Supports In this paper, we propose an intelligent clearinghouse system, an electronic marketplace with computer-mediated negotiation supports. Most existing electronic market systems support relatively stable markets: traders are not allowed to revise their bids and offers during the market transaction. The intelligent clearinghouse addresses dynamic markets where buyers and sellers are willing to change their utilities as market conditions evolve. Traders in dynamic markets may suffer a significant loss if they fail to execute transactions promptly. The clearinghouse enables traders to compromise their original utilities to avoid transaction failures. This paper describes the foundation of the clearinghouse system and discusses its trading mechanism, including its order matching method and negotiation support capabilities.
Nonrepetitive colorings of trees A coloring of the vertices of a graph G is nonrepetitive if no path in G forms a sequence consisting of two identical blocks. The minimum number of colors needed is the Thue chromatic number, denoted by π(G). A famous theorem of Thue asserts that π(P)=3 for any path P with at least four vertices. In this paper we study the Thue chromatic number of trees. In view of the fact that π(T) is bounded by 4 in this class we aim to describe the 4-chromatic trees. In particular, we study the 4-critical trees which are minimal with respect to this property. Though there are many trees T with π(T)=4 we show that any of them has a sufficiently large subdivision H such that π(H)=3. The proof relies on Thue sequences with additional properties involving palindromic words. We also investigate nonrepetitive edge colorings of trees. By a similar argument we prove that any tree has a subdivision which can be edge-colored by at most Δ+1 colors without repetitions on paths.
Complete LKF approach to stabilization for linear systems with time-varying input delay This paper is concerned with stability analysis and stabilization for linear time-delay systems with interval time-varying input delay. By using an augmented complete Lyapunov–Krasovskii functional (LKF) and introducing appropriate terms in dealing with the positiveness of the LKF, we establish new stability and stabilization criteria in terms of linear matrix inequalities (LMIs). The present method leads to some significant improvements over existing results. Moreover, the main feature of this work lies in that the present results are applicable for time-delay systems with unstable delay-free case. Three numerical examples are given to show the effectiveness and merits of the present results.
Scores (score_0–score_13): 1.019222, 0.020588, 0.018182, 0.018182, 0.009679, 0.004213, 0.001319, 0.000215, 0.000044, 0, 0, 0, 0, 0
Query: Software caching and computation migration in Olden The goal of the Olden project is to build a system that provides parallelism for general purpose C programs with minimal programmer annotations. We focus on programs using dynamic structures such as trees, lists, and DAGs. We demonstrate that providing both software caching and computation migration can improve the performance of these programs, and provide a compile-time heuristic that selects between them for each pointer dereference. We have implemented a prototype system on the Thinking Machines CM-5. We describe our implementation and report on experiments with ten benchmarks.
Supporting dynamic data structures on distributed-memory machines Compiling for distributed-memory machines has been a very active research area in recent years. Much of this work has concentrated on programs that use arrays as their primary data structures. To date, little work has been done to address the problem of supporting programs that use pointer-based dynamic data structures. The techniques developed for supporting SPMD execution of array-based programs rely on the fact that arrays are statically defined and directly addressable. Recursive data structures do not have these properties, so new techniques must be developed. In this article, we describe an execution model for supporting programs that use pointer-based dynamic data structures. This model uses a simple mechanism for migrating a thread of control based on the layout of heap-allocated data and introduces parallelism using a technique based on futures and lazy task creation. We intend to exploit this execution model using compiler analyses and automatic parallelization techniques. We have implemented a prototype system, which we call Olden, that runs on the Intel iPSC/860 and the Thinking Machines CM-5. We discuss our implementation and report on experiments with five benchmarks.
SPMD execution of programs with dynamic data structures on distributed memory machines A combination of language features and compilation techniques that permits SPMD (single-program multiple-data) execution of programs with pointer-based dynamic data structures is presented. The Distributed Dynamic Pascal (DDP) language, which supports the construction and manipulation of local as well as distributed data structures, is described. The compiler techniques developed translate a sequential DDP program for SPMD execution in which all processors are provided with the same program but each processor executes only that part of the program which operates on the elements of the distributed data structures local to the processor. Therefore, the parallelism implicit in a sequential program is exploited. An approach for implementing pointers that is based on the generation of names for the nodes in a dynamic data structure is presented. The name-based strategy makes possible the dynamic distribution of data structures among the processors as well as the traversal of distributed data structures without interprocessor communication
Distributed data structures in Linda A distributed data structure is a data structure that can be manipulated by many parallel processes simultaneously. Distributed data structures are the natural complement to parallel program structures, where a parallel program (for our purposes) is one that is made up of many simultaneously active, communicating processes. Distributed data structures are impossible in most parallel programming languages, but they are supported in the parallel language Linda and they are central to Linda programming style. We outline Linda, then discuss some distributed data structures that have arisen in Linda programming experiments to date. Our intent is neither to discuss the design of the Linda system nor the performance of Linda programs, though we do comment on both topics; we are concerned instead with a few of the simpler and more basic techniques made possible by a language model that, we argue, is subtly but fundamentally different in its implications from most others. This material is based upon work supported by the National Science Foundation under Grant No. MCS-8303905. Jerry Leichter is supported by a Digital Equipment Corporation Graduate Engineering Education Program fellowship.
Exception Handling in Multilisp
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches-lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structure as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
Fuzzy identification of systems and its application to modeling and control
Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions (supplying commands and arguments to programs) and semantic functions (visually presenting application semantics and supporting problem solving cognition). The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models nor metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers.
Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity-Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation.
Characterizing plans as a set of constraints—the model—a framework for comparative analysis This paper presents an approach to representing and manipulating plans based on a model of plans as a set of constraints. The <I-N-OVA> model is used to characterise the plan representation used within O-Plan and to relate this work to emerging formal analyses of plans and planning. This synergy of practical and formal approaches can stretch the formal methods to cover realistic plan representations as needed for real problem solving, and can improve the analysis that is possible for production planning systems. <I-N-OVA> is intended to act as a bridge to improve dialogue between a number of communities working on formal planning theories, practical planning systems and systems engineering process management methodologies. It is intended to support new work on automatic manipulation of plans, human communication about plans, principled and reliable acquisition of plan information, and formal reasoning about plans.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0–score_13): 1.113636, 0.093939, 0.069798, 0.002101, 0.000114, 0.000018, 0, 0, 0, 0, 0, 0, 0, 0
Query: Deployment by Construction for Multicore Architectures. In stepwise program development, abstract specifications can be transformed into (parallel) programs which preserve functional correctness. Although tackling bad performance after a program's deployment may require a costly redesign, deployment decisions are usually made very late in program development. This paper argues for the introduction of deployment decisions as an integrated part of a development-by-construction process: Deployment decisions should be expressed as part of a program's high-level model and evaluated by how they affect program performance, using metrics at an appropriate level of abstraction. To illustrate such a deployment-by-construction process, we sketch how deployment decisions may be modelled and evaluated, concerning data layout in shared memory for parallel programs targeting shared-memory multicore architectures with caches. For simplicity, we use an abstract metric of data access penalties and simulate data accesses on a memory system which internally ensures data coherency between cores.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Query: Information System Design: An Expert System Approach We present in this paper some aspects of an expert design tool called OICSI. The scope of OICSI is to generate the IS conceptual schema from a description of the application domain given with a subset of the French natural language. The tool starts, in a first step, with an interpretation of natural language descriptions leading to a descriptive network which corresponds to a first version of the conceptual schema. Then, in the second step, OICSI uses design rules in order to complete and transform the descriptive network into a normalized network which describes all the elements of the final conceptual schema. The paper focusses on the second step. It presents the representation and validation rules included in the OICSI knowledge base as a formalization of design rules issued from our own practice and experience of Information System design.
Sofspec - A Pragmatic Approach To Automated Specification Verification This paper describes a system for the automatic verification of commercial application specifications—SOFSPEC. After having established a relationship to the other requirement specification approaches, the user interface and the database schema are presented. The database schema is based on the entity/relationship model and encompasses four entities and six relationships with a varying number of attributes. These are briefly outlined. Then, the paper describes how these entities and relations are checked against one another in order to ascertain the completeness and consistency of the specification before it is finally documented.
The Use of the Entity-Relationship Model as a Schema for Organizing the Data Processing Activities
Management Database Study
A Software Engineering View of Data Base Management This paper examines the field of data base management from the perspective of software engineering. Key topics in software engineering are related to specific activities in data base design and implementation. An attempt is made to show the similarities between steps in the creation of systems involving data bases and other kinds of software systems. It is argued that there is a need to unify thinking about data base systems with other kinds of software systems and tools in order to build high quality systems. The programming language PLAIN and its programming environment is introduced as a tool for integrating notions of programming languages, data base management, and software engineering.
Teamwork Support in a Knowledge-Based Information Systems Environment Development assistance for interactive database applications (DAIDA) is an experimental environment for the knowledge-assisted development and maintenance of database-intensive information systems from object-oriented requirements and specifications. Within the DAIDA framework, an approach to integrate different tasks encountered in software projects via a conceptual modeling strategy has been developed. Emphasis is put on integrating the semantics of the software development domain with aspects of group work, on social strategies to negotiate problems by argumentation, and on assigning responsibilities for task fulfillment by way of contracting. The implementation of a prototype is demonstrated with a sample session.
Recording the reasons for design decisions We outline a generic model for representing design deliberation and the relation between deliberation and the generation of method-specific artifacts. A design history is regarded as a network consisting of artifacts and deliberation nodes. Artifacts represent specifications or design documents. Deliberation nodes represent issues, alternatives or justifications. Existing artifacts give rise to issues about the evolving design, an alternative is one of several positions that respond to the issue (perhaps calling for the creation or modification of an artifact), and a justification is a statement giving the reasons for and against the related alternative. The model is applied to the development of a text formatter. The example necessitates some tailoring of the generic model to the method adopted in the development, Liskov and Guttag's design method. We discuss the experiment and the method-specific extensions. The example development has been represented in hypertext and as a Prolog database, the two representations being shown to complement each other. We conclude with a discussion of the relation between this model and other work, and the implications for tool support and methods.
Real-time constraints in a rapid prototyping language This paper presents real-time constraints of a prototyping language and some mechanisms for handling these constraints in rapidly prototyping embedded systems. Rapid prototyping of embedded systems can be accomplished using a Computer Aided Prototyping System (CAPS) and its associated Prototyping Language (PSDL) to aid the designer in handling hard real-time constraints. The language models time critical operations with maximum execution times, maximum response times and minimum periods. The mechanisms for expressing timing constraints in PSDL are described along with their meanings relative to a series of hardware models which include multi-processor configurations. We also describe a language construct for specifying the policies governing real-time behavior under overload conditions.
Experience with Formal Methods in Critical Systems Although there are indisputable benefits to society from the introduction of computers into everyday life, some applications are inherently risky. Worldwide, regulatory agencies are examining how to assure safety and security. This study reveals the applicability and limitations of formal methods.
Escaping the software tar pit: model clashes and how to avoid them "No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits… Large system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it… Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it. But we must try to understand it if we are to solve it." Fred Brooks, 1975. Several recent books and reports have confirmed that the software tar pit is at least as hazardous today as it was in 1975. Our research into several classes of models used to guide software development (product models, process models, property models, success models) has convinced us that the concept of model clashes among these classes of models helps explain much of the stickiness of the software tar-pit problem. We have been developing and experimentally evolving an approach called MBASE -- Model-Based (System) Architecting and Software Engineering -- which helps identify and avoid software model clashes. Section 2 of this paper introduces the concept of model clashes, and provides examples of common clashes for each combination of product, process, property, and success model. Sections 3 and 4 introduce the MBASE approach for endowing a software project with a mutually supportive set of models, and illustrate the application of MBASE to an example corporate resource scheduling system. Section 5 summarizes the results of applying the MBASE approach to a family of small digital library projects. Section 6 presents conclusions to date.
Time-Dependent Distributed Systems: Proving Safety, Liveness and Real-Time Properties Most communication protocol systems utilize timers to implement real-time constraints between event occurrences. Such systems are said to be time-dependent if the real-time constraints are crucial to their correct operation. We present a model for specifying and verifying time-dependent distributed systems. We consider networks of processes that communicate with one another by message-passing. Each process has a set of state variables and a set of events. An event is described by a predicate that relates the values of the network's state variables immediately before to their values immediately after the event occurrence. The predicate embodies specifications of both the event's enabling condition and action. Inference rules for both safety and liveness properties are presented. Real-time progress properties can be verified as safety properties. We illustrate with three sliding window data transfer protocols that use modulo-2 sequence numbers. The first protocol operates over channels that only lose messages. It is a time-independent protocol. The second and third protocols operate over channels that lose, reorder, and duplicate messages. For their correct operation, it is necessary that messages in the channels have bounded lifetimes. They are time-dependent protocols.
Refinement in Circus We describe refinement in Circus, a concurrent specification language that integrates imperative CSP, Z, and the refinement calculus. Each Circus process has a state and accompanying actions that define both the internal state transitions and the changes in control flow that occur during execution. We define the meaning of refinement of processes and their actions, and propose a sound data refinement technique for process refinement. Refinement laws for CSP and Z are directly relevant and applicable to Circus, but our focus here is on new laws for processes that integrate state and control. We give some new results about the distribution of data refinement through the combinators of CSP. We illustrate our ideas with the development of a distributed system of cooperating processes from a centralised specification.
Reverse engineering distributed algorithms Distributed systems are difficult for a human being to comprehend; informal reasoning about the many parallel and decentralized activities in these systems is not trustworthy. Therefore formal tools for construction and maintenance of distributed systems are needed. We introduce a formal approach to reverse engineering distributed systems that is based on a technique we call coarsement. The idea is that an implementation is stepwise turned into a high level specification through a number of intermediate coarsement steps that preserve the basic functionality of the implementation. The method gives structure to a distributed algorithm that can now be seen as consisting of a number of layers interacting with each other. Each coarsement step produces one such layer. Furthermore, after the coarsement steps the algorithm is easier to understand and to reason about than the original one due to this layering. We show the practical feasibility of the coarsement approach to reverse engineering by analysing a non-trivial distributed algorithm that maintains the routeing information for message passing among a set of processing nodes in a distributed network.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0–score_13): 1.068079, 0.066752, 0.066752, 0.066752, 0.022296, 0.007982, 0.002681, 0.000409, 0.000181, 0.000062, 0.000002, 0, 0, 0
Query: Design theory for dynamic complexity in information infrastructures: the case of building internet We propose a design theory that tackles dynamic complexity in the design for Information Infrastructures (IIs) defined as a shared, open, heterogeneous and evolving socio-technical system of Information Technology (IT) capabilities. Examples of IIs include the Internet, or industry-wide Electronic Data Interchange (EDI) networks. IIs are recursively composed of other infrastructures, platforms, applications and IT capabilities and controlled by emergent, distributed and episodic forms of control. II's evolutionary dynamics are nonlinear, path dependent and influenced by network effects and unbounded user and designer learning. The proposed theory tackles tensions between two design problems related to the II design: (1) the bootstrap problem: IIs need to meet directly early users' needs in order to be initiated; and (2) the adaptability problem: local designs need to recognize II's unbounded scale and functional uncertainty. We draw upon Complex Adaptive Systems theory to derive II design rules that address the bootstrap problem by generating early growth through simplicity and usefulness, and the adaptability problem by promoting modular and generative designs. We illustrate these principles by analyzing the history of Internet exegesis.
The brave new world of design requirements: four key principles Despite its undoubted success, Requirements Engineering (RE) needs a better alignment between its research focus and its grounding in practical needs as these needs have changed significantly recently. We explore changes in the environment, targets, and the process of requirements engineering (RE) that influence the nature of fundamental RE questions. Based on these explorations we propose four key principles that underlie current requirements processes: (1) intertwining of requirements with implementation and organizational contexts, (2) dynamic evolution of requirements, (3) architectures as a critical stabilizing force, and (4) high levels of design complexity. We make recommendations to refocus RE research agenda as to meet new challenges based on the review and analysis of these four key themes. We note several managerial and practical implications.
The brave new world of design requirements Despite its success over the last 30 years, the field of Requirements Engineering (RE) is still experiencing fundamental problems that indicate a need for a change of focus to better ground its research on issues underpinning current practices. We posit that these practices have changed significantly in recent years. To this end we explore changes in software system operational environments, targets, and the process of RE. Our explorations include a field study, as well as two workshops that brought together experts from academia and industry. We recognize that these changes influence the nature of central RE research questions. We identify four new principles that underlie contemporary requirements processes, namely: (1) intertwining of requirements with implementation and organizational contexts, (2) dynamic evolution of requirements, (3) emergence of architectures as a critical stabilizing force, and (4) need to recognize unprecedented levels of design complexity. We recommend a re-focus of RE research based on a review and analysis of these four principles, and identify several theoretical and practical implications that flow from this analysis.
Toward reference models for requirements traceability Requirements traceability is intended to ensure continued alignment between stakeholder requirements and various outputs of the system development process. To be useful, traces must be organized according to some modeling framework. Indeed, several such frameworks have been proposed, mostly based on theoretical considerations or analysis of other literature. This paper, in contrast, follows an empirical approach. Focus groups and interviews conducted in 26 major software development organizations demonstrate a wide range of traceability practices with distinct low-end and high-end users of traceability. From these observations, reference models comprising the most important kinds of traceability links for various development tasks have been synthesized. The resulting models have been validated in case studies and are incorporated in a number of traceability tools. A detailed case study on the use of the models is presented. Four kinds of traceability link types are identified and critical issues that must be resolved for implementing each type and potential solutions are discussed. Implications for the design of next-generation traceability methods and tools are discussed and illustrated.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Programmers use slices when debugging Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program.
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
ACE: building interactive graphical applications
Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it.
Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.1
0.066667
0.012121
0
0
0
0
0
0
0
0
0
0
Using Meta-Modelling and Graph Grammars to Process GPSS Models This paper discusses the benefits of combining meta-modelling and graph transformations to automatically generate modelling tools for simulation formalisms. In meta-modelling, formalisms are modelled in their own right at a meta-level within an appropriate meta-formalism. A meta-model processor uses this information to automatically generate tools to process (create, edit, check, optimize, transform and generate simulators for) the models in the described formalism. We propose the representation of (meta-)models as graphs, and subsequently specify model manipulations as graph grammars. We also present AToM3, A Tool for Multi-formalism and Meta-Modelling, which implements these concepts. As an example, we show how to build a meta-model for the popular process interaction discrete event language GPSS in AToM3. From this meta-model, AToM3 automatically generates a visual tool to build GPSS models. We also define a graph grammar to generate textual code for the HGPSS simulator from the graphically specified GPSS models.
Computer Aided Multi-paradigm Modelling to Process Petri-Nets and Statecharts This paper proposes a Multi-Paradigm approach to the modelling of complex systems. The approach consists of the combination of meta-modelling, multi-formalism modelling, and modelling at multiple levels of abstraction. We implement these concepts in AToM3, A Tool for Multi-formalism, Meta-Modelling. In AToM3, modelling formalisms are modelled in their own right at a meta-level within an appropriate formalism. AToM3 uses the information found in the meta-models to automatically generate tools to process (create, edit, check, optimize, transform and generate simulators for) the models in the described formalism. Model processing is described at a meta-level by means of models in the graph grammar formalism. As an example, meta-models for both syntax and semantics of Statecharts (without hierarchy) and Petri-Nets are presented. This includes a graph grammar modelling the transformation between Statecharts and Petri-Nets.
AToM3: A Tool for Multi-formalism and Meta-modelling This article introduces the combined use of multiformalism modelling and meta-modelling to facilitate computer assisted modelling of complex systems. The approach allows one to model different parts of a system using different formalisms. Models can be automatically converted between formalisms thanks to information found in a Formalism Transformation Graph (FTG), proposed by the authors. To aid in the automatic generation of multi-formalism modelling tools, formalisms are modelled in their own right (at a meta-level) within an appropriate formalism. This has been implemented in the interactive tool AToM3. This tool is used to describe formalisms commonly used in the simulation of dynamical systems, as well as to generate custom tools to process (create, edit, transform, simulate, optimise, ...) models expressed in the corresponding formalism. AToM3 relies on graph rewriting techniques and graph grammars to perform the transformations between formalisms as well as for other tasks, such as code generation and operational semantics specification.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
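As a rough illustration of the basic enabling and firing rule surveyed in the Petri net abstract above, the following Python sketch encodes a tiny hypothetical net; the net itself, the names (PRE, POST, fire) and the example marking are illustrative assumptions, not material from the paper.

```python
# Minimal sketch of the basic Petri net firing rule (hypothetical example net).
# A net is given by Pre/Post incidence maps: for each transition, how many
# tokens it consumes from / produces in each place.
from typing import Dict

Marking = Dict[str, int]

PRE = {"t1": {"p1": 1}, "t2": {"p2": 1}}            # tokens consumed per transition
POST = {"t1": {"p2": 1}, "t2": {"p1": 1, "p3": 1}}  # tokens produced per transition

def enabled(marking: Marking, t: str) -> bool:
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in PRE[t].items())

def fire(marking: Marking, t: str) -> Marking:
    """Firing removes the Pre tokens and adds the Post tokens: M' = M - Pre(t) + Post(t)."""
    if not enabled(marking, t):
        raise ValueError(f"transition {t} is not enabled")
    new = dict(marking)
    for p, n in PRE[t].items():
        new[p] = new.get(p, 0) - n
    for p, n in POST[t].items():
        new[p] = new.get(p, 0) + n
    return new

if __name__ == "__main__":
    m0 = {"p1": 1, "p2": 0, "p3": 0}
    m1 = fire(m0, "t1")   # {'p1': 0, 'p2': 1, 'p3': 0}
    m2 = fire(m1, "t2")   # {'p1': 1, 'p2': 0, 'p3': 1}
    print(m1, m2)
```

Enumerating such firing steps from an initial marking is exactly the kind of state-space exploration that the reachability analyses mentioned in the survey operate on.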
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Management Database Study
Using schematic scenarios to understand user needs Scenarios are narrative descriptions of interactions between users and proposed systems. The concreteness of scenarios helps users and designers develop a shared understanding of the proposed system's functionality; but concreteness leads to a potentially unbounded number of scenarios for a system. To help designers develop a limited set of salient scenarios, we propose a schema similar to story schemata. Like stories, scenarios have protagonists with goals, they start with background information already in place, and they have a point that makes them interesting or tests the reader's understanding. The scenario schema provides a structural framework for deriving scenarios with slots for such teleological information. Scenarios are derived from a description of the system's and the user's goals, and the potential obstacles that block those goals. In this paper, we describe the scenario schema and a method for deriving a set of salient scenarios. We illustrate how these scenarios can be used in the analysis of user needs for a multi-user office application.
Inquiry-Based Requirements Analysis This approach emphasizes pinpointing where and when information needs occur; at its core is the inquiry cycle model, a structure for describing and supporting discussions about system requirements. The authors use a case study to describe the model's conversation metaphor, which follows analysis activities from requirements elicitation and documentation through refinement.
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
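To make the snapshot idea above more concrete, here is a minimal, hedged sketch of the marker rule as it is usually presented, assuming FIFO channels; the Process class, its fields and the send callback are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of the marker-based snapshot rule (FIFO channels assumed).
# All names and the toy application state are illustrative, not the paper's code.
MARKER = "MARKER"

class Process:
    def __init__(self, pid, in_channels):
        self.pid = pid
        self.state = 0                                       # toy local state
        self.recorded_state = None                           # local state at snapshot time
        self.recording = {c: False for c in in_channels}     # per-channel recording flag
        self.channel_logs = {c: [] for c in in_channels}     # in-transit messages recorded

    def start_snapshot(self, send):
        """Initiator rule: record own state, then send a marker on every outgoing channel."""
        self.recorded_state = self.state
        self.recording = {c: True for c in self.recording}
        send(self.pid, MARKER)

    def receive(self, channel, msg, send):
        if msg == MARKER:
            if self.recorded_state is None:
                # First marker seen: record local state, record this channel as empty,
                # start recording all other incoming channels, relay markers downstream.
                self.recorded_state = self.state
                self.recording = {c: (c != channel) for c in self.recording}
                send(self.pid, MARKER)
            else:
                # Marker on a channel we were recording: that channel's state is complete.
                self.recording[channel] = False
        else:
            self.state += msg                                # toy application behaviour
            if self.recorded_state is not None and self.recording[channel]:
                self.channel_logs[channel].append(msg)
```

The recorded local states together with the channel logs form the consistent global state on which stable properties such as termination or deadlock can then be evaluated.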
Unifying execution of imperative and declarative code We present a unified environment for running declarative specifications in the context of an imperative object-oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in place, i.e., in terms of the existing data structures and objects on the heap. After a solution is found, the heap is updated to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way. We show that this approach is not only convenient, but, for certain problems, can also outperform a standard imperative implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects.
Generating Executable Scenarios from Natural Language Bridging the gap between the specification of software requirements and actual execution of the behavior of the specified system has been the target of much research in recent years. We have created a natural language interface, which, for a useful class of systems, yields the automatic production of executable code from structured requirements. In this paper we describe how our method uses static and dynamic grammar for generating live sequence charts (LSCs), that constitute a powerful executable extension of sequence diagrams for reactive systems. We have implemented an automatic translation from controlled natural language requirements into LSCs, and we demonstrate it on two sample reactive systems.
A Weaker Precondition for Loops
Matching conceptual graphs as an aid to requirements re-use The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, ReqColl (Requirements Collector) is introduced. The tool uses conceptual graphs to represent domain concepts and attempts to recognise new concepts through the use of a matching facility. The overall approach to requirements capture is first described and the approach to matching illustrated informally. The detailed procedure for matching conceptual graphs is then given. Finally ReqColl is compared to similar work elsewhere and some future research directions indicated.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.22
0.073333
0.031429
0.008
0.00039
0.000075
0.000034
0.000015
0
0
0
0
0
0
Membership-dependent stability conditions for type-1 and interval type-2 T-S fuzzy systems. This paper presents an idea to simplify and relax the stability conditions of Takagi–Sugeno (T–S) fuzzy systems based on the membership function extrema. By considering the distribution of membership functions in a unified membership space, a graphical approach is provided to analyze the conservativeness of membership-dependent stability conditions. Membership function extrema are used to construct a simple and tighter convex polyhedron that encloses the membership trajectory and produces less conservative linear matrix inequality (LMI) conditions. The cases of both type-1 and interval type-2 T–S fuzzy systems are considered, and comparison with existing methods is made in the proposed membership vector framework.
Robust Kalman Filtering under Model Perturbations We consider a family of divergence-based minimax approaches to perform robust filtering. The mismodeling budget, or tolerance, is specified at each time increment of the model. More precisely, all possible model increments belong to a ball which is formed by placing a bound on the Tau-divergence family between the actual and the nominal model increment. Then, the robust filter is obtained by minimizing the mean square error according to the least favorable model in that ball. It turns out that the solution is a family of Kalman like filters. Their gain matrix is updated according to a risk sensitive like iteration where the risk sensitivity parameter is now time varying. As a consequence, we also extend the risk sensitive filter to a family of risk sensitive like filters according to the Tau-divergence family.
Actuator and sensor faults estimation based on proportional integral observer for TS fuzzy model. This paper presents a novel method to address a Proportional Integral observer design for the actuator and sensor faults estimation based on Takagi–Sugeno fuzzy model with unmeasurable premise variables. The faults are assumed as time-varying signals whose kth time derivatives are bounded. Using Lyapunov stability theory and L2 performance analysis, sufficient design conditions are developed for simultaneous estimation of states and time-varying actuator and sensor faults. The Proportional Integral observer gains are computed by solving the proposed conditions under Linear Matrix Inequalities constraints. A simulation example is provided to illustrate the effectiveness of the proposed approach.
A novel Lyapunov-Krasovskii functional approach to stability and stabilization for T-S fuzzy systems with time delay. This paper is concerned with the problem of the stability and stabilization for continuous-time Takagi–Sugeno (T–S) fuzzy systems with time delay. A novel Lyapunov–Krasovskii functional which includes a fuzzy line-integral Lyapunov functional and a membership-function-dependent Lyapunov functional is proposed to investigate stability and stabilization of T–S fuzzy systems with time delay. In addition, a switching idea which can avoid the time derivative of membership functions is introduced to deal with the derivative term. A relaxed Wirtinger inequality is employed to estimate the integral cross term. Sufficient stability and stabilization criteria are derived in the form of matrix inequalities which can be solved using the switching idea and the LMI method. Several numerical examples are given to demonstrate the advantage and effectiveness of the proposed method by comparing with some recent works.
Feedback error learning control of magnetic satellites using type-2 fuzzy neural networks with elliptic membership functions. A novel type-2 fuzzy membership function (MF) in the form of an ellipse has recently been proposed in literature, the parameters of which that represent uncertainties are de-coupled from its parameters that determine the center and the support. This property has enabled the proposers to make an analytical comparison of the noise rejection capabilities of type-1 fuzzy logic systems with its type-2 counterparts. In this paper, a sliding mode control theory-based learning algorithm is proposed for an interval type-2 fuzzy logic system which benefits from elliptic type-2 fuzzy MFs. The learning is based on the feedback error learning method and not only the stability of the learning is proved but also the stability of the overall system is shown by adding an additional component to the control scheme to ensure robustness. In order to test the efficiency and efficacy of the proposed learning and the control algorithm, the trajectory tracking problem of a magnetic rigid spacecraft is studied. The simulations results show that the proposed control algorithm gives better performance results in terms of a smaller steady state error and a faster transient response as compared to conventional control algorithms.
Wirtinger-based integral inequality: Application to time-delay systems In the last decade, the Jensen inequality has been intensively used in the context of time-delay or sampled-data systems since it is an appropriate tool to derive tractable stability conditions expressed in terms of linear matrix inequalities (LMIs). However, it is also well-known that this inequality introduces an undesirable conservatism in the stability conditions and looking at the literature, reducing this gap is a relevant issue and always an open problem. In this paper, we propose an alternative inequality based on the Fourier Theory, more precisely on the Wirtinger inequalities. It is shown that this resulting inequality encompasses the Jensen one and also leads to tractable LMI conditions. In order to illustrate the potential gain of employing this new inequality with respect to the Jensen one, two applications on time-delay and sampled-data stability analysis are provided.
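To make the comparison in the abstract above concrete, the LaTeX fragment below states the classical Jensen bound next to the Wirtinger-based bound in the form commonly reported in this line of work; it is a reference sketch rather than a transcription of the paper's exact statement.

```latex
% Jensen bound versus the Wirtinger-based bound (common form; a reference
% sketch, not a transcription of the paper's exact statement). Requires amsmath.
\[
\int_a^b \dot{x}^{\top}(s)\,R\,\dot{x}(s)\,ds
\;\ge\; \frac{1}{b-a}\,\Omega_0^{\top} R\,\Omega_0
\qquad \text{(Jensen)}
\]
\[
\int_a^b \dot{x}^{\top}(s)\,R\,\dot{x}(s)\,ds
\;\ge\; \frac{1}{b-a}\left(\Omega_0^{\top} R\,\Omega_0 + 3\,\Omega_1^{\top} R\,\Omega_1\right)
\qquad \text{(Wirtinger-based)}
\]
\[
\Omega_0 = x(b)-x(a),\qquad
\Omega_1 = x(b)+x(a)-\frac{2}{b-a}\int_a^b x(s)\,ds,\qquad
R = R^{\top} > 0 .
\]
```

Since the additional term involving Ω1 is nonnegative, the second bound is at least as tight as Jensen's, which is the source of the reduced conservatism the abstract refers to.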
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
The contract net protocol: high-level communication and control in a distributed problem solver The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is affected by a negotiation process, a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks.
Stability of time-delay systems: equivalence between Lyapunov and scaled small-gain conditions It is demonstrated that many previously reported Lyapunov-based stability conditions for time-delay systems are equivalent to the robust stability analysis of an uncertain comparison system free of delays via the use of the scaled small-gain lemma with constant scales. The novelty of this note stems from the fact that it unifies several existing stability results under the same framework. In addition, it offers insights on how new, less conservative results can be developed.
Metaphors and models: conceptual foundations of representations in interactive systems development When system developers design a computer system (or other information artifact), they must inevitably make judgements as to how to abstract the worksystem and how to represent this abstraction in their designs. In the past, such abstractions have been based either on a traditional philosophy of cognition of cognitive psychology or on intuitive, spontaneous philosophies. A number of recent developments in distributed cognition (Hutchins, 1995), activity theory (Nardi, 1996), and experientialism (Lakoff, 1987) have raised questions about the legitimacy of such philosophies. In this article, we discuss from where the abstractions come that designers employ and how such abstractions are related to the concepts that the users of these systems have. In particular, we use the theory of experientialism or experiential cognition as the foundation for our analysis. Experientialism (Lakoff, 1987) has previously only been applied to human-computer interaction (HCI) design in a quite limited way, yet it deals specifically with issues concerned with categorization and concept formation. We show how the concept of metaphor, derived from experientialism, can be used to understand the strengths and weaknesses of alternative representations in HCI design, how it can highlight changes in the paradigm underlying representations, and how it can be used to consider new approaches to HCI design. We also discuss the role that "mental spaces" have in forming new concepts and designs.
Design with Asynchronously Communicating Components Software oriented methods allow a higher level of abstraction than the often quite low-level hardware design methods used today. We propose a component-based method to organise a large system derivation within the B Method via its facilities as provided by the tools. The designer proceeds from an abstract high-level specification of the intended behaviour of the target system via correctness-preserving transformation steps towards an implementable architecture of library components which communicate asynchronously. At each step a pre-defined component is extracted and the correctness of the step is proved using the tool support of the B Method. We use Action Systems as our formal approach to system design.
It's alive! continuous feedback in UI programming Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and execution, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible. This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows, specifying means stating independently of any implementation the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
MoMut::UML Model-Based Mutation Testing for UML
1.2
0.2
0.1
0.04
0.011111
0.000429
0
0
0
0
0
0
0
0
A comparison of evaluation metrics for document filtering Although document filtering is simple to define, there is a wide range of different evaluation measures that have been proposed in the literature, all of which have been subject to criticism. We present a unified, comparative view of the strengths and weaknesses of proposed measures based on two formal constraints (which should be satisfied by any suitable evaluation measure) and various properties (which help differentiate measures according to their behaviour). We conclude that (i) some smoothing process is necessary to satisfy the basic constraints; and (ii) metrics can be grouped into three families, each satisfying one out of three formal properties, which are mutually exclusive, i.e. no metric can satisfy all three properties simultaneously.
Evaluating document clustering for interactive information retrieval We consider the problem of organizing and browsing the top ranked portion of the documents returned by an information retrieval system. We study the effectiveness of a document organization in helping a user to locate the relevant material among the retrieved documents as quickly as possible. In this context we examine a set of clustering algorithms and experimentally show that a clustering of the retrieved documents can be significantly more effective than traditional ranked list approach. We also show that the clustering approach can be as effective as the interactive relevance feedback based on query expansion while retaining an important advantage -- it provides the user with a valuable sense of control over the feedback process.
Simulating simple user behavior for system effectiveness evaluation Information retrieval effectiveness evaluation typically takes one of two forms: batch experiments based on static test collections, or lab studies measuring actual users interacting with a system. Test collection experiments are sometimes viewed as introducing too many simplifying assumptions to accurately predict the usefulness of a system to its users. As a result, there is great interest in creating test collections and measures that better model user behavior. One line of research involves developing measures that include a parameterized user model; choosing a parameter value simulates a particular type of user. We propose that these measures offer an opportunity to more accurately simulate the variance due to user behavior, and thus to analyze system effectiveness to a simulated user population. We introduce a Bayesian procedure for producing sampling distributions from click data, and show how to use statistical tools to quantify the effects of variance due to parameter selection.
A dynamic bayesian network click model for web search ranking As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. The main difficulty however comes from the so called position bias - urls appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance.
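The position bias described in the abstract above can be illustrated with a much simpler cascade-style simulation; the sketch below is a hedged illustration of that bias only, not the paper's dynamic Bayesian network, and all names and parameter values are made up for the example.

```python
# Simplified cascade-style illustration of position bias (not the paper's DBN):
# a user scans results top-down, clicks an examined result with probability equal
# to its attractiveness, and stops the session with some probability after a click.
import random

def simulate_session(attractiveness, satisfaction=0.6, rng=random.random):
    """Return the list of clicked positions for one simulated user session."""
    clicks = []
    for pos, a in enumerate(attractiveness):   # position 0 is the top result
        if rng() < a:                          # user clicks the examined result
            clicks.append(pos)
            if rng() < satisfaction:           # a satisfied user stops scanning
                break
    return clicks

if __name__ == "__main__":
    random.seed(0)
    attr = [0.4, 0.4, 0.4, 0.4]                # identical relevance at every rank
    counts = [0, 0, 0, 0]
    for _ in range(10000):
        for pos in simulate_session(attr):
            counts[pos] += 1
    # Lower positions accumulate fewer clicks even though relevance is identical,
    # which is why raw click-through rate is a biased relevance estimate.
    print(counts)
```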
On Clustering Validation Techniques Cluster analysis aims at identifying groups of similar objects and, therefore helps to discover distribution of patterns and interesting correlations in large data sets. It has been subject of wide research since it arises in many application domains in engineering, business and social sciences. Especially, in the last years the availability of huge transactional and experimental data sets and the arising requirements for data mining created needs for clustering algorithms that scale and can be applied in diverse domains. This paper introduces the fundamental concepts of clustering while it surveys the widely known clustering algorithms in a comparative way. Moreover, it addresses an important issue of clustering process regarding the quality assessment of the clustering results. This is also related to the inherent features of the data set under concern. A review of clustering validity measures and approaches available in the literature is presented. Furthermore, the paper illustrates the issues that are under-addressed by the recent algorithms and gives the trends in clustering process.
WePS3 Evaluation Campaign: Overview of the On-line Reputation Management Task This paper summarizes the definition, resources, evaluation methodology and metrics, participation and comparative results for the second task of the WEPS-3 evaluation campaign. The so-called Online Reputation Management task consists of filtering Twitter posts containing a given company name depending on whether the post is actually related with the company or not. Five research groups submitted results for the task.
UNED Online Reputation Monitoring Team at RepLab 2013.
"Piaf" vs "Adele": classifying encyclopedic queries using automatically labeled training data Encyclopedic queries express the intent of obtaining information typically available in encyclopedias, such as biographical, geographical or historical facts. In this paper, we train a classifier for detecting the encyclopedic intent of web queries. For training such a classifier, we automatically label training data from raw query logs. We use click-through data to select positive examples of encyclopedic queries as those queries that mostly lead to Wikipedia articles. We investigated a large set of features that can be generated to describe the input query. These features include both term-specific patterns as well as query projections on knowledge bases items (e.g. Freebase). Results show that using these feature sets it is possible to achieve an F1 score above 87%, competing with a Google-based baseline, which uses a much wider set of signals to boost the ranking of Wikipedia for potential encyclopedic queries. The results also show that both query projections on Wikipedia article titles and Freebase entity match represent the most relevant groups of features. When the training set contains frequent positive examples (i.e rare queries are excluded) results tend to improve.
An image multiresolution representation for lossless and lossy compression We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods.
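As a hedged illustration of the kind of building block the abstract above describes, here is a minimal reversible integer pairwise transform (average/difference, computed with integer additions and bit shifts only); it is in the spirit of the described decomposition but is not the paper's exact transform, and the function names are illustrative.

```python
# Minimal sketch of a reversible integer pairwise transform (S-transform style):
# integer additions and bit shifts only, exactly invertible. This illustrates the
# kind of building block described above, not the paper's exact transform.

def forward(a, b):
    low = (a + b) >> 1        # truncated average (coarse / low-pass value)
    high = a - b              # difference (detail / high-pass value)
    return low, high

def inverse(low, high):
    b = low - (high >> 1)     # recover b from the truncated average and difference
    a = high + b              # then recover a exactly
    return a, b

if __name__ == "__main__":
    for a in range(0, 256, 17):
        for b in range(0, 256, 23):
            assert inverse(*forward(a, b)) == (a, b)   # exact reconstruction
    print("lossless round-trip verified")
```

The round-trip assertion checks exact invertibility, which is the property that makes such integer transforms usable for lossless (and progressive lossy-to-lossless) coding.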
Protocol Verification Via Projections The method of projections is a new approach to reduce the complexity of analyzing nontrivial communication protocols. A protocol system consists of a network of protocol entities and communication channels. Protocol entities interact by exchanging messages through channels; messages in transit may be lost, duplicated as well as reordered. Our method is intended for protocols with several distinguishable functions. We show how to construct image protocols for each function. An image protocol is specified just like a real protocol. An image protocol system is said to be faithful if it preserves all safety and liveness properties of the original protocol system concerning the projected function. An image protocol is smaller than the original protocol and can typically be more easily analyzed. Two protocol examples are employed herein to illustrate our method. An application of this method to verify a version of the high-level data link control (HDLC) protocol is described in a companion paper.
Using Abstraction and Model Checking to Detect Safety Violations in Requirements Specifications Exposing inconsistencies can uncover many defects in software specifications. One approach to exposing inconsistencies analyzes two redundant specifications, one operational and the other property-based, and reports discrepancies. This paper describes a "practical" formal method, based on this approach and the SCR (Software Cost Reduction) tabular notation, that can expose inconsistencies in software requirements specifications. Because users of the method do not need advanced mathematical training or theorem proving skills, most software developers should be able to apply the method without extraordinary effort. This paper also describes an application of the method which exposed a safety violation in the contractor-produced software requirements specification of a sizable, safety-critical control system. Because the enormous state space of specifications of practical software usually renders direct analysis impractical, a common approach is to apply abstraction to the specification. To reduce the state space of the control system specification, two "pushbutton" abstraction methods were applied, one which automatically removes irrelevant variables and a second which replaces the large, possibly infinite, type sets of certain variables with smaller type sets. Analyzing the reduced specification with the model checker Spin uncovered a possible safety violation. Simulation demonstrated that the safety violation was not spurious but an actual defect in the original specification.
Software Tools and Environments Any system that assists the programmer with some aspect of programming can be considered a programming tool. Similarly, a system that assists in some phase of the software development process can be considered a software tool. A programming environment is a suite of programming tools designed to simplify programming and thereby enhance programmer productivity. A software engineering environment extends this to software tools and the whole software development process. Software tools are categorized by the phase of software development and the particular problems that they address. Software environments are characterized by the type and kinds of tools they contain and thus the aspects of software development they address. Additionally, software environments are distinguished by how the tools they include are related, that is, the type and degree of integration among the tools, and by the size and nature of the systems they are designed to address. Software tools and environments are designed to enhance productivity. Many tools do this directly by automating or simplifying some task. Others do it indirectly, either by facilitating more powerful programming languages, architectures, or systems, or by making the software development task more enjoyable. Still others attempt to enhance productivity by providing the user with information that might be needed for the task at hand.
A component-based framework for modeling and analyzing probabilistic real-time systems A challenging research issue of analyzing a real-time system is to model the tasks composing the system and the resource provided to the system. In this paper, we propose a probabilistic component-based model which abstracts in the interfaces both the functional and non-functional requirements of such systems. This approach allows designers to unify in the same framework probabilistic scheduling techniques and compositional guarantees that go from soft to hard real-time. We provide sufficient schedulability tests for task systems using such framework when the scheduler is either preemptive Fixed-Priority or Earliest Deadline First.
Analysis and Design of Secure Massive MIMO Systems in the Presence of Hardware Impairments. To keep the hardware costs of future communications systems manageable, the use of low-cost hardware components is desirable. This is particularly true for the emerging massive multiple-input multiple-output (MIMO) systems which equip base stations (BSs) with a large number of antenna elements. However, low-cost transceiver designs will further accentuate the hardware impairments, which are presen...
1.100217
0.100434
0.050394
0.050225
0.025411
0.010218
0.000206
0.000091
0
0
0
0
0
0
Toward synthesis from English descriptions This paper reports on a research project to design a system for automatically interpreting English specifications of digital systems in terms of design representation formalisms currently employed in CAD systems. The necessary processes involve the machine analysis of English and the synthesis of models from the specifications. The approach being investigated is interactive and consists of syntactic scanning, semantic analysis, interpretation generation, and model integration.
Automated assists to the behavioral modeling process The coding of behavioral models is a time consuming and error prone process. In this paper the authors describe automated assists to the behavioral modeling process which reduce the coding time and result in models which have a well defined structure making it easier to insure their accuracy. The approach uses a particular graphical representation for the model. An interactive tool then assists in converting the graphical representation to the behavioral HDL code. The authors discuss a pictorial representation for VHDL behavioral models. In VHDL an architectural body is used to define the behavior of a device. These architectural bodies are a set of concurrently running process. These processes are either process blocks or various forms of the signal assignment statements. One can give a pictorial representation to a behavioral architectural body by means of a process model graph (PMG)
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Mapping design knowledge from multiple representations The requirements and specifications documents which initiate and control design and development projects typically use a variety of formal and informal notational systems. The goal of the research reported is to automatically interpret requirement documents expressed in a variety of notations and to integrate the interpretations in order to support requirements analysis and synthesis from them. Because the source notations include natural language, a form of semantic net called conceptual graphs is adopted as the intermediate knowledge representation for expressing interpretations and integrating them. The focus is to describe the interpretation or mapping of a few requirements notations to conceptual graphs, and to indicate the process of joining these interpretations
Visual feedback for validation of informal specifications In automatically synthesizing simulation models from informal specifications, the ambiguity of natural language (English) leads to multiple interpretations. The authors report on a system, called the Model Generator, which provides visual feedback showing the interpretation of specification statements that have been automatically translated to a knowledge representation called conceptual graphs. The visual feedback is based on a combination of block diagrams and Petri net graphs.
Expanding the utility of semantic networks through partitioning An augmentation of semantic networks is presented in which the various nodes and arcs are partitioned into "net spaces." These net spaces delimit the scopes of quantified variables, distinguish hypothetical and imaginary situations from reality, encode alternative worlds considered in planning, and focus attention at particular levels of detail.
The transformation schema: An extension of the data flow diagram to represent control and timing The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behavior. This paper describes an extension of the data flow diagram called the transformation schema. The transformation schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent “centers of activity” (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values. The transformation schema permits the creation and evaluation of two different types of system models. In the essential (requirements) model, the schema is used to represent a virtual machine with infinite resources. The elements of the schema depict idealized processing and memory components. In the implementation model, the schema is used to represent a real machine with limited resources, and the results of the execution predict the behavior of an implementation of requirements. The transformations of the schema can depict software running on digital processors, hard-wired digital or analog circuits, and so on, and the stores of the schema can depict disk files, tables in memory, and so on.
Constraining Pictures with Pictures
A framework for expressing the relationships between multiple views in requirements specification Composite systems are generally comprised of heterogeneous components whose specifications are developed by many development participants. The requirements of such systems are invariably elicited from multiple perspectives that overlap, complement, and contradict each other. Furthermore, these requirements are generally developed and specified using multiple methods and notations, respectively. It is therefore necessary to express and check the relationships between the resultant specification fragments. We deploy multiple ViewPoints that hold partial requirements specifications, described and developed using different representation schemes and development strategies. We discuss the notion of inter-ViewPoint communication in the context of this ViewPoints framework, and propose a general model for ViewPoint interaction and integration. We elaborate on some of the requirements for expressing and enacting inter-ViewPoint relationships-the vehicles for consistency checking and inconsistency management. Finally, though we use simple fragments of the requirements specification method CORE to illustrate various components of our work, we also outline a number of larger case studies that we have used to validate our framework. Our computer-based ViewPoints support environment, The Viewer, is also briefly described.
A roadmap for comprehensive online privacy policy management A framework supporting the privacy policy life cycle helps guide the kind of research to consider before sound privacy answers may be realized.
GRAIL/KAOS: An Environment for Goal-Driven Requirements Analysis, Integration and Layout The KAOS methodology provides a language, a method, and meta-level knowledge for goal-driven requirements elaboration. The language provides a rich ontology for capturing requirements in terms of goals, constraints, objects, actions, agents etc. Links between requirements are represented as well to capture refinements, conflicts, operationalizations, responsibility assignments, etc. The KAOS specification language is a multi-paradigm language with a two-level structure: an outer semantic net layer for declaring concepts, their attributes and links to other concepts, and an inner formal assertion layer for formally defining the concept. The latter combines a real-time temporal logic for the specification of goals, constraints, and objects, and standard pre-/postconditions for the specification of actions and their strengthening to ensure the constraints.
Understanding the requirements for developing open source software systems This study presents an initial set of findings from an empirical study of social processes, technical system configurations, organizational contexts, and interrelationships that give rise to open software. The focus is directed at understanding the requirements for open software development efforts, and how the development of these requirements differs from those traditional to software engineering and requirements engineering. Four open software development communities are described, examined, and compared to help discover what these differences may be. Eight kinds of software informalisms are found to play a critical role in the elicitation, analysis, specification, validation, and management of requirements for developing open software systems. Subsequently, understanding the roles these software informalisms take in a new formulation of the requirements development process for open source software is the focus of this study. This focus enables considering a reformulation of the requirements engineering process and its associated artifacts or (in)formalisms to better account for the requirements for developing open source software systems.
Fuzzy Time Series Forecasting With a Probabilistic Smoothing Hidden Markov Model Since its emergence, the study of fuzzy time series (FTS) has attracted more attention because of its ability to deal with the uncertainty and vagueness that are often inherent in real-world data resulting from inaccuracies in measurements, incomplete sets of observations, or difficulties in obtaining measurements under uncertain circumstances. The representation of fuzzy relations that are obtained from a fuzzy time series plays a key role in forecasting. Most of the works in the literature use the rule-based representation, which tends to encounter the problem of rule redundancy. A remedial forecasting model was recently proposed in which the relations were established based on the hidden Markov model (HMM). However, its forecasting performance generally deteriorates when encountering more zero probabilities owing to fewer fuzzy relationships that exist in the historical temporal data. This paper thus proposes an enhanced HMM-based forecasting model by developing a novel fuzzy smoothing method to overcome performance deterioration. To deal with uncertainty more appropriately, the roulette-wheel selection approach is applied to probabilistically determine the forecasting result. The effectiveness of the proposed model is validated through real-world forecasting experiments, and performance comparison with other benchmarks is conducted by a Monte Carlo method.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.078406
0.067829
0.067829
0.067829
0.022693
0.000883
0.000031
0.000006
0.000001
0
0
0
0
0
Serializability in Distributed Systems with Handshaking
Multifaceted distributed systems specification using processes and event synchronization A new approach to modelling distributed systems is presented. It uses sequential processes and event synchronization as the major building blocks and is able to capture the functionality, architecture, scheduling policies, and performance attributes of a distributed system. The approach is meant to provide the foundation for a uniform incremental strategy for verifying both logical and performance properties of distributed systems. In addition, this approach draws together work on performance evaluation, resource allocation, and verification of concurrent processes by reducing some problems from the first two areas to equivalent problems in the third. A language called CSPS (an extension of Hoare's CSP) is used in the illustration of the approach. Employing CSP as a base allows modelled systems to be verified using techniques already developed for verifying CSP programs
Stepwise design of real-time systems The joint action approach to modeling of reactive systems is presented and augmented with real time. This leads to a stepwise design method where temporal logic of actions can be used for formal reasoning, superposition is the key mechanism for transformations, the advantages of closed-system modularity are utilized, logical properties are addressed before real-time properties, and real-time properties are enforced without any specific assumptions on scheduling. As a result, real-time modeling is made possible already at early stages of specification, and increased insensitivity is achieved with respect to properties imposed by implementation environments.
Specifications of Concurrently Accessed Data Our specification of the buffer illustrates how some of the requirements described in the introduction are met. The specification is concise, and it can be manipulated easily. This allowed us to derive several properties of the buffer (Appendix A) and construct a proof of buffer concatenation (Section 4). Also refinement of the specification with the eventual goal of implementation seems feasible with this scheme.
Operational specification with joint actions: serializable databases Joint actions are introduced as a language basis for operational specification of reactive systems. Joint action systems are closed systems with no communication primitives. Their nondeterministic execution model is based on multi-party actions without an explicit control flow, and they are amenable for stepwise derivation by superposition. The approach is demonstrated by deriving a specification for serializable databases in simple derivation steps. Two different implementation strategies are imposed on this as further derivations. One of the strategies is two-phase locking, for which a separate implementation is given and proved correct. The other is multiversion timestamp ordering, for which the derivation itself is an implementation.
Superposition and fairness in reactive system refinement An overview of the refinement calculus and of the action system paradigm for constructing parallel and reactive systems is given. Superposition is studied in detail, as an example of an important method for refinement of reactive programs. In connection with superposition, fairness of action system execution is considered, and a proof rule for preserving fairness in superposition refinement is given
A Correctness Proof of a Distributed Minimum-Weight Spanning Tree Algorithm (extended abstract)
The existence of refinement mappings Refinement mappings are used to prove that a lower-level specification correctly implements a higher-level one. The authors consider specifications consisting of a state machine (which may be infinite-state) that specifies safety requirements and an arbitrary supplementary property that specifies liveness requirements. A refinement mapping from a lower-level specification S1 to a higher-level one S2 is a mapping from S1's state space to S2's state space that maps steps of S1's state machine to steps of S2's state machine and maps behaviors allowed by S1 to behaviors allowed by S2. It is shown that under reasonable assumptions about the specifications, if S1 implements S2, then by adding auxiliary variables to S1 one can guarantee the existence of a refinement mapping. This provides a completeness result for a practical hierarchical specification method.
Refinement of Parallel and Reactive Programs We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement calculus. We exemplify the approach by a derivation of a mutual exclusion algorithm.
Data Refinement of Remote Procedures Recently the action systems formalism for parallel and distributed systems has been extended with the procedure mechanism. This gives us a very general framework for describing different communication paradigms for action systems, e.g. remote procedure calls. Action systems come with a design methodology based on the refinement calculus. Data refinement is a powerful technique for refining action systems. In this paper we will develop a theory and proof rules for the refinement of action systems that communicate via remote procedures based on the data refinement approach. The proof rules we develop are compositional so that modular refinement of action systems is supported. As an example we will especially study the atomicity refinement of actions. This is an important refinement strategy, as it potentially increases the degree of parallelism in an action system.
A Modeling Foundation for a Second Generation System Engineering Tool
Distributed State Space Generation of Discrete-State Stochastic Models High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems that can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this article we report on the implementation of a distributed state space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed memory multicomputer.
Structuring and verifying distributed algorithms We present a structuring and verification method for distributed algorithms. The basic idea is that an algorithm to be verified is stepwise transformed into a high level specification through a number of steps, so-called coarsenings. At each step some mechanism of the algorithm is identified, verified and removed while the basic computation of the original algorithm is preserved. The method is based on a program development technique called superposition and it is formalized within the refinement calculus. We will show the usefulness of the method by verifying a complex distributed algorithm for minimum-hop route maintenance due to Chu.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1.014354, 0.01946, 0.017444, 0.016081, 0.011308, 0.00808, 0.004219, 0.001028, 0.000061, 0.000009, 0, 0, 0, 0
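The distributed state-space generation entry in this row turns on assigning each newly generated state to an owning processor on the fly. The sketch below simulates that idea with hash-based partitioning over a toy transition relation; the successor function, worker count, and the sequential scheduling loop are illustrative assumptions rather than the implementation described in the abstract.

from collections import deque

def successors(state):
    # Hypothetical toy transition relation: a bounded pair of counters.
    x, y = state
    return [s for s in ((x + 1, y), (x, y + 1)) if s[0] <= 3 and s[1] <= 3]

def owner(state, num_workers):
    # On-the-fly partitioning: a hash decides which worker stores the state.
    return hash(state) % num_workers

def distributed_reachability(initial, num_workers=4):
    visited = [set() for _ in range(num_workers)]   # per-worker state store
    queues = [deque() for _ in range(num_workers)]  # per-worker work queue
    queues[owner(initial, num_workers)].append(initial)
    while any(queues):
        for w in range(num_workers):
            while queues[w]:
                s = queues[w].popleft()
                if s in visited[w]:
                    continue
                visited[w].add(s)
                for t in successors(s):
                    # "Send" the successor to the worker that owns it.
                    queues[owner(t, num_workers)].append(t)
    return set().union(*visited)

print(len(distributed_reachability((0, 0))))  # 16 reachable states in the toy model

A good partitioning function keeps the per-worker sets roughly equal in size, which is exactly the balance problem the abstract highlights.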
Exploiting Intra-Slice and Inter-Slice Redundancy for Learning-Based Lossless Volumetric Image Compression 3D volumetric image processing has attracted increasing attention in the last decades, in which one major research area is to develop efficient lossless volumetric image compression techniques to better store and transmit such images with massive amounts of information. In this work, we propose the first end-to-end optimized learning framework for losslessly compressing 3D volumetric data. Our approach builds upon a hierarchical compression scheme by additionally introducing the intra-slice auxiliary features and estimating the entropy model based on both intra-slice and inter-slice latent priors. Specifically, we first extract the hierarchical intra-slice auxiliary features through multi-scale feature extraction modules. Then, an Intra-slice and Inter-slice Conditional Entropy Coding module is proposed to fuse the intra-slice and inter-slice information from different scales as the context information. Based on such context information, we can predict the distributions for both intra-slice auxiliary features and the slice images. To further improve the lossless compression performance, we also introduce two new gating mechanisms called Intra-Gate and Inter-Gate to generate the optimal feature representations for better information fusion. Eventually, we can produce the bitstream for losslessly compressing volumetric images based on the estimated entropy model. Different from the existing lossless volumetric image codecs, our end-to-end optimized framework jointly learns both intra-slice auxiliary features at different scales for each slice and inter-slice latent features from previously encoded slices for better entropy estimation. The extensive experimental results indicate that our framework outperforms the state-of-the-art hand-crafted lossless volumetric image codecs (e.g., JP3D) and the learning-based lossless image compression method on four volumetric image benchmarks for losslessly compressing both 3D Medical Images and Hyper-Spectral Images.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
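The tabu search entry in this row works by flipping one zero-one variable at a time while forbidding recently reversed flips, with an aspiration criterion that overrides the tabu status whenever a new best solution is reached. A minimal sketch on a small, hypothetical multiconstraint knapsack instance follows; the instance data, tabu tenure, and iteration budget are arbitrary illustrative choices, not those of the cited study.

import random

# Hypothetical instance: maximize profit subject to two knapsack constraints.
profits = [10, 7, 12, 8, 6]
weights = [[3, 4, 5, 2, 3],   # constraint 1 coefficients
           [4, 2, 6, 3, 2]]   # constraint 2 coefficients
capacities = [9, 10]

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(len(x))) <= c
               for w, c in zip(weights, capacities))

def value(x):
    return sum(p * xi for p, xi in zip(profits, x))

def tabu_search(iters=200, tenure=3, seed=0):
    random.seed(seed)
    x = [0] * len(profits)             # start from the empty knapsack
    best, best_val = x[:], value(x)
    tabu = {}                          # variable index -> iteration until which it is tabu
    for it in range(iters):
        moves = []
        for i in range(len(x)):
            y = x[:]
            y[i] ^= 1                  # flip one variable
            if not feasible(y):
                continue
            v = value(y)
            # Aspiration: a tabu move is admissible only if it beats the best so far.
            if tabu.get(i, -1) >= it and v <= best_val:
                continue
            moves.append((v, i, y))
        if not moves:
            break
        v, i, y = max(moves)           # best admissible neighbour, even if worsening
        x = y
        tabu[i] = it + tenure          # forbid flipping i back for `tenure` iterations
        if v > best_val:
            best, best_val = y[:], v
    return best, best_val

print(tabu_search())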
An improved time-delay implementation of derivative-dependent feedback. We consider an LTI system of relative degree r≥2 that can be stabilized using r−1 output derivatives. The derivatives are approximated by finite differences, leading to a time-delayed feedback. We present a new method of designing and analyzing such feedback under continuous-time and sampled measurements. This method admits an essentially larger time delay/sampling period than existing results and, for the first time, allows the use of consecutively sampled measurements in the sampled-data case. The main idea is to present the difference between the derivative and its approximation in a convenient integral form. The kernel of this integral is hard to express explicitly, but we show that it satisfies certain properties. These properties are employed to construct a Lyapunov–Krasovskii functional that leads to LMI-based stability conditions. If the derivative-dependent control exponentially stabilizes the system, then its time-delayed approximation stabilizes the system with the same decay rate, provided the time delay (for continuous-time measurements) or the sampling period (for sampled measurements) is small enough.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
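The time-delayed feedback query in this row approximates output derivatives by finite differences. For relative degree r = 2, a minimal illustration of the idea in standard backward-difference form (not the paper's exact construction) is

u(t) = k_0\,y(t) + k_1\,\dot y(t)
\quad\longrightarrow\quad
u_h(t) = k_0\,y(t) + k_1\,\frac{y(t) - y(t-h)}{h},

and one elementary way to write the approximation error in integral form is

\dot y(t) - \frac{y(t) - y(t-h)}{h} = \frac{1}{h}\int_{t-h}^{t} \bigl(\dot y(t) - \dot y(s)\bigr)\,ds .

The right-hand side vanishes as h tends to 0 for smooth y; the cited paper works with a more refined integral representation of this error, whose kernel then enters the Lyapunov–Krasovskii analysis.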
From E-R to "A-R" - Modelling Strategic Actor Relationships for Business Process Reengineering
Models for supporting the redesign of organizational work Many types of models have been proposed for supporting organizational work. In this paper, we consider models that are used for supporting the redesign of organizational work. These models are used to help discover opportunities for improvements in organizations, introducing information technologies where appropriate. To support the redesign of organizational work, models are needed for describing work configurations, and for identifying issues, exploring alternatives, and evaluating them. Several approaches are presented and compared. The i* framework — consisting of the Strategic Dependency and Strategic Rationale models — is discussed in some detail, as it is expressly designed for modelling and redesigning organizational work. We argue that models which view organizational participants as intentional actors with motivations and intents, and abilities and commitments, are needed to provide richer representations of organizational work to support its effective redesign. The redesign of a bank loan operation is used as illustration.
A requirements and design aid for relational data bases A tool is described for defining data processing system requirements and for automatically generating data base designs from the requirements. The generated designs are specific to System R but the mapping rules are valid for the relational model in general and can be adapted to other data models as well. The requirements and design are stored in a System R data base, are cross-referenced with each other, and can be accessed and used for other purposes. The requirements are defined in terms of an organized common-sense semantic model and serve the function of the Conceptual Schema in the ANSI/SPARC three schema framework. The tool generates (synthesizes) relational designs that have no redundancy, no update anomalies, and are in 5th normal form. The requirements analysis and design generation procedures are illustrated with a case study.
DAIDA: an environment for evolving information systems We present a framework for the development of information systems based on the premise that the knowledge that influences the development process needs to somehow be captured, represented, and managed if the development process is to be rationalized. Experiences with a prototype environment developed in ESPRIT project DAIDA demonstrate the approach. The project has implemented an environment based on state-of-the-art languages for requirements modeling, design and implementation of information systems. In addition, the environment offers tools for aiding the mapping process from requirements to design and then to implementation, also for representing decisions reached during the development process. The development process itself is represented explicitly within the system, thus making the DAIDA development framework easier to comprehend, use, and modify.
A logic of action for supporting goal-oriented elaborations of requirements Constructing requirements specifications for a complex system is quite a difficult process. In this paper, we have focused on the elaboration part of this process, where new requirements are progressively identified and incorporated into the requirements document. We propose a requirements specification language which, beyond the mere expression of requirements, also supports the elaboration step. This language is a dialect of Gist in which the concepts of goal and of agent, characterized by some responsibility, are identified. A formalization of this requirements language is proposed in terms of a non-standard modal logic of actions.
An Executable Meta Model for Re-Engineering of Database Schemas A logical database schema, e.g. a relational one, is an implementation of a specification, e.g. an entity-relationship diagram. Upcoming new data models and the necessity of seamless integration of databases into application programs require a cost-effective method for mapping from one data model into the other. We present an approach where the mapping relationship is divided into three parts. The first part maps the input schema into a so-called meta model. The second part rearranges the intermediate representation, and the last part produces the schema in the target data model. A prototype has been implemented on top of a deductive object base manager for the mapping of relational schemas to entity-relationship diagrams. From this, a C++-based tool has been derived that will be part of a commercial CASE environment.
Representing and using nonfunctional requirements: a process-oriented approach A comprehensive framework for representing and using nonfunctional requirements during the development process is proposed. The framework consists of five basic components which provide the representation of nonfunctional requirements in terms of interrelated goals. Such goals can be refined through refinement methods and can be evaluated in order to determine the degree to which a set of nonfunctional requirements is supported by a particular design. Evidence for the power of the framework is provided through the study of accuracy and performance requirements for information systems.
Guiding goal modeling using scenarios Even though goal modeling is an effective approach to requirements engineering, it is known to present a number of difficulties in practice. The paper discusses these difficulties and proposes to couple goal modeling and scenario authoring to overcome them. Whereas existing techniques use scenarios to concretize goals, we use them to discover goals. Our proposal is to define enactable rules which form the basis of a software environment called L'Ecritoire to guide the requirements elicitation process through interleaved goal modeling and scenario authoring. The focus of the paper is on the discovery of goals from scenarios. The discovery process is centered around the notion of a requirement chunk (RC) which is a pair ⟨Goal, Scenario⟩. The paper presents the notion of RC, the rules to support the discovery of RCs and illustrates the application of the approach within L'Ecritoire using the ATM example. It also evaluates the potential practical benefits expected from the use of the approach
A Systematic Tradeoff Analysis for Conflicting Imprecise Requirements The need to deal with conflicting system requirements has become increasingly important over the past several years. Often, these requirements are elastic in that they can be satisfied to a degree. The overall goal of this research is to develop a formal framework that facilitates the identification and the tradeoff analysis of conflicting requirements by explicitly capturing their elasticity. Based on a fuzzy set theoretic foundation for representing imprecise requirements, we describe a systematic approach for analyzing the tradeoffs between conflicting requirements using the techniques in decision science. The systematic tradeoff analyses are used for three important tasks in the requirement engineering process: (1) for validating the structure used in aggregating prioritized requirements, (2) for identifying the structures and the parameters of the underlying representation of imprecise requirements, and (3) for assessing the priorities of conflicting requirements. We illustrate these techniques using the requirements of a conference room scheduling system.
RSF: a formalism for executable requirement specifications RSF is a formalism for specifying and prototyping systems with time constraints. Specifications are given via a set of transition rules. The application of a transition rule is dependent upon certain events. The occurrence times of the events and the data associated with them must satisfy given properties. As a consequence of the application of a rule, some events are generated and others are scheduled to occur in the future, after given intervals of time. Specifications can be queried, and the computation of answers to queries provides a generalized form of rapid prototyping. Executability is obtained by mapping the RSF rules into logic programming. The rationale, a definition of the formalism, the execution techniques which support the general notion of rapid prototyping and a few examples of its use are presented.
The external structure: Experience with an automated module interconnection language To study the problems of modifiable software, the Software Technology project has investigated approaches and methodologies that could improve modifiability. To test our approaches, tools based on data abstraction (a design and programming language and a module interconnection language) were built and used. The incorporation of the module interconnection language into design altered the traditional model of system building. Introducing novices to our approach led to the formalization of new models of program design, development, and evaluation.
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
MASA: a multithreaded processor architecture for parallel symbolic computing MASA is a “first cut” at a processor architecture intended as a building block for a multiprocessor that can execute parallel Lisp programs efficiently. MASA features a tagged architecture, multiple contexts, fast trap handling, and a synchronization bit in every memory word. MASA's principal novelty is its use of multiple contexts both to support multithreaded execution—interleaved execution from separate instruction streams—and to speed up procedure calls and trap handling in the same manner as register windows. A project is under way to evaluate MASA-like architectures for executing programs written in Multilisp.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0 to score_13: 1.01394, 0.012797, 0.010608, 0.006502, 0.004268, 0.002494, 0.001717, 0.000909, 0.000389, 0.000085, 0.000006, 0, 0, 0
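The goal and scenario entry in this row is built around requirement chunks, pairs of a goal and a scenario used to drive goal discovery. A minimal sketch of such a pairing is shown below; the field names and the toy discovery rule are hypothetical and do not reproduce the L'Ecritoire enactable rules.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    description: str
    steps: List[str] = field(default_factory=list)

@dataclass
class RequirementChunk:
    goal: str           # intention the system should satisfy
    scenario: Scenario  # concrete behaviour illustrating the goal

def discover_alternative_goals(chunk: RequirementChunk,
                               variations: List[str]) -> List[RequirementChunk]:
    # Illustrative rule: each variation of the scenario suggests a refined goal.
    return [RequirementChunk(goal=f"{chunk.goal} ({v})", scenario=chunk.scenario)
            for v in variations]

atm = RequirementChunk(
    goal="Withdraw cash from the ATM",
    scenario=Scenario("Normal withdrawal", ["insert card", "enter PIN", "take cash"]),
)
for rc in discover_alternative_goals(atm, ["with a foreign card", "when the printer is out of paper"]):
    print(rc.goal)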
Designing a VR interaction authoring tool using constructivist practices This paper describes the process of designing an authoring tool for virtual environments, using constructivist principles. The focus of the tool is on helping novice designers without coding experience to conceptualise and visualise the interactions of the virtual environment. According to constructivism, knowledge is constructed by people through interactions with their social and physical environments. Major aspects of this theory are explored, such as multiple representations, reflexivity, exploration, scaffolding and user control. Its practical application to the design of the tool is then described.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
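The data refinement entry in this row takes programs to be predicate transformers in Dijkstra's sense. The sketch below encodes predicates as sets of states over a tiny state space and checks refinement as pointwise containment of weakest preconditions; the encoding and the example programs are illustrative assumptions, not the calculus of the cited paper.

from itertools import chain, combinations

# Tiny state space: one variable x ranging over 0..3.
STATES = list(range(4))

def all_predicates():
    # Every predicate over STATES, represented as a set of states.
    return [set(c) for c in
            chain.from_iterable(combinations(STATES, r) for r in range(len(STATES) + 1))]

def assign(f):
    # Deterministic assignment x := f(x), as a weakest-precondition transformer.
    return lambda post: {s for s in STATES if f(s) in post}

def choice(t1, t2):
    # Demonic choice between two statements: both must establish the postcondition.
    return lambda post: t1(post) & t2(post)

def refines(concrete, abstract):
    # concrete refines abstract iff wp(abstract, Q) is contained in wp(concrete, Q) for every Q.
    return all(abstract(q) <= concrete(q) for q in all_predicates())

# The abstract program nondeterministically sets x to 1 or 2; the concrete one
# always picks 1, which resolves the nondeterminism and is therefore a refinement.
abstract = choice(assign(lambda x: 1), assign(lambda x: 2))
concrete = assign(lambda x: 1)
print(refines(concrete, abstract))   # True
print(refines(abstract, concrete))   # False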
Cooperation without communication Intelligent agents must be able to interact even without the benefit of communication. In this paper we examine various constraints on the actions of agents in such situations and discuss the effects of these constraints on their derived utility. In particular, we define and analyze basic rationality; we consider various assumptions about independence; and we demonstrate the advantages of extending the definition of rationality from individual actions to decision procedures.
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Designing And Building A Negotiating Automated Agent Negotiations are very important in a multiagent environment, particularly, in an environment where there are conflicts between the agents, and cooperation would be beneficial. We have developed a general structure for a Negotiating Automated Agent that consists of five modules: a Prime Minister, a Ministry of Defense, a Foreign Office, a Headquarters and Intelligence. These modules are implemented using a dynamic set of local agents belonging to the different modules. We used this structure to develop a Diplomacy player, Diplomat. Playing Diplomacy involves a certain amount of technical skills as in other board games, but the capacity to negotiate, explain, convince, promise, keep promises or break them, is an essential ingredient in good play. Diplomat was evaluated and consistently played better than human players.
A metamodel approach for the management of multiple models and the translation of schemes A metamodel approach is proposed as a framework for the definition of different data models and the management of translations of schemes from one model to another. This notion is useful in an environment for the support of the design and development of information systems, since different data models can be used and schemes referring to different models need to be exchanged. The approach is based on the observation that the constructs used in the various models can be classified into a limited set of basic types, such as lexical type, abstract type, aggregation, function. It follows that the translations of schemes can be specified on the basis of translations of the involved types of constructs: this is effectively performed by means of a procedural language and a number of predefined modules that express the standard translations between the basic constructs.
Database design with common sense business reasoning and learning Automated database design systems embody knowledge about the database design process. However, their lack of knowledge about the domains for which databases are being developed significantly limits their usefulness. A methodology for acquiring and using general world knowledge about business for database design has been developed and implemented in a system called the Common Sense Business Reasoner, which acquires facts about application domains and organizes them into a hierarchical, context-dependent knowledge base. This knowledge is used to make intelligent suggestions to a user about the entities, attributes, and relationships to include in a database design. A distance function approach is employed for integrating specific facts, obtained from individual design sessions, into the knowledge base (learning) and for applying the knowledge to subsequent design problems (reasoning).
Subsumption between queries to object-oriented databases Most work on query optimization in relational and object-oriented databases has concentrated on tuning algebraic expressions and the physical access to the database contents. The attention to semantic query optimization, however, has been restricted due to its inherent complexity. We take a second look at semantic query optimization in object-oriented databases and find that reasoning techniques for concept languages developed in Artificial Intelligence apply to this problem because concept...
Teamwork Support in a Knowledge-Based Information Systems Environment Development assistance for interactive database applications (DAIDA) is an experimental environment for the knowledge-assisted development and maintenance of database-intensive information systems from object-oriented requirements and specifications. Within the DAIDA framework, an approach to integrate different tasks encountered in software projects via a conceptual modeling strategy has been developed. Emphasis is put on integrating the semantics of the software development domain with aspects of group work, on social strategies to negotiate problems by argumentation, and on assigning responsibilities for task fulfillment by way of contracting. The implementation of a prototype is demonstrated with a sample session.
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches-lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structure as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
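As an illustration of the I-structure discipline described above (my own sketch, not Id's implementation): a write-once cell whose reads are deferred until the value arrives. The class name IStructureCell and the use of Python threading primitives are assumptions made for the example.

```python
import threading

class IStructureCell:
    """Write-once cell in the I-structure style: reads block until a value
    has been written, and a second write is an error (determinacy is kept)."""
    def __init__(self):
        self._written = threading.Event()
        self._lock = threading.Lock()
        self._value = None

    def put(self, value):
        with self._lock:
            if self._written.is_set():
                raise RuntimeError("I-structure cell written twice")
            self._value = value
            self._written.set()        # wake any deferred readers

    def get(self):
        self._written.wait()           # deferred read: block until written
        return self._value

# A consumer may issue the read before the producer writes:
cell = IStructureCell()
threading.Thread(target=lambda: cell.put(42)).start()
print(cell.get())                      # blocks briefly, then prints 42
```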
Refinement calculus, part I: sequential nondeterministic programs A lattice theoretic framework for the calculus of program refinement is presented. Specifications and program statements are combined into a single (infinitary) language of commands which permits miraculous, angelic and demonic statements to be used in the description of program behavior. The weakest precondition calculus is extended to cover this larger class of statements and a game-theoretic interpretation is given for these constructs. The language is complete, in the sense that every monotonic predicate transformer can be expressed in it. The usual program constructs can be defined as derived notions in this language. The notion of inverse statements is defined and its use in formalizing the notion of data refinement is shown.
Linear hybrid action systems Action Systems is a predicate-transformer-based formalism. It supports the development of provably correct reactive and distributed systems by refinement. Recently, Action Systems were extended with a differential action. It is used for modelling continuous behaviour, thus allowing the use of refinement in the development of provably correct hybrid systems, i.e., a discrete controller interacting with some continuously evolving environment. However, refinement as a method is concerned with correctness issues only. It offers very little guidance on which details one should consider during the refinement steps to make the system more robust. That information is revealed by robustness analysis. Other formalisms not supporting refinement do have tool support for automating the robustness analysis, e.g., HyTech for linear hybrid automata. Consequently, we study in this paper the non-trivial translation problem between Action Systems and linear hybrid automata. As the main contribution, we give and prove correct an algorithm that translates a linear hybrid action system to a linear hybrid automaton. With this algorithm we combine the strengths of the two formalisms: we may use HyTech for the robustness analysis to guide the development by refinement.
Large project experiences with object-oriented methods and reuse
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.068039
0.066893
0.066893
0.066893
0.066893
0.066893
0.033725
0.017266
0.007524
0.000124
0.000001
0
0
0
A Survey of Deep Active Learning Active learning (AL) attempts to maximize a model’s performance gain while annotating the fewest samples possible. Deep learning (DL) is greedy for data and requires a large amount of data supply to optimize a massive number of parameters if the model is to learn how to extract high-quality features. In recent years, due to the rapid development of internet technology, we have entered an era of information abundance characterized by massive amounts of available data. As a result, DL has attracted significant attention from researchers and has been rapidly developed. Compared with DL, however, researchers have a relatively low interest in AL. This is mainly because before the rise of DL, traditional machine learning required relatively few labeled samples, meaning that early AL was rarely accorded the value it deserves. Although DL has made breakthroughs in various fields, most of this success is due to a large number of publicly available annotated datasets. However, the acquisition of a large number of high-quality annotated datasets consumes a lot of manpower, making it unfeasible in fields that require high levels of expertise (such as speech recognition, information extraction, medical images, etc.). Therefore, AL is gradually coming to receive the attention it is due. It is therefore natural to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. As a result of such investigations, deep active learning (DeepAL) has emerged. Although research on this topic is quite abundant, there has not yet been a comprehensive survey of DeepAL-related works; accordingly, this article aims to fill this gap. We provide a formal classification method for the existing work, along with a comprehensive and systematic overview. In addition, we also analyze and summarize the development of DeepAL from an application perspective. Finally, we discuss the confusion and problems associated with DeepAL and provide some possible development directions.
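A minimal pool-based uncertainty-sampling loop, of the kind most DeepAL pipelines build on; this is my own sketch, and the train, predict_proba, and annotate callables are hypothetical placeholders for a deep model's training routine, its softmax output, and the human oracle.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_sampling(train, predict_proba, annotate,
                         labeled, pool, rounds=5, batch=10):
    """Pool-based active learning skeleton: in each round, train on the
    labeled set, score the unlabeled pool by predictive entropy, and send
    the most uncertain examples to the oracle for annotation."""
    pool = list(pool)
    for _ in range(rounds):
        model = train(labeled)
        scores = [entropy(predict_proba(model, x)) for x in pool]
        ranked = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)
        picked = set(ranked[:batch])
        labeled.extend((pool[i], annotate(pool[i])) for i in picked)
        pool = [x for i, x in enumerate(pool) if i not in picked]
    return train(labeled)
```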
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
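A toy sketch (mine, not the paper's method) of the core tabu-search loop on a multiconstraint 0/1 knapsack, showing the single-flip neighborhood, tabu tenure, and aspiration-by-objective ideas; the paper's extreme-point machinery, advanced strategies, and target analysis are omitted.

```python
def tabu_knapsack(values, weights, capacities, iters=2000, tenure=7):
    """Toy tabu search for the multiconstraint 0/1 knapsack:
    maximize sum(values[i]*x[i]) subject to, for every constraint k,
    sum(weights[k][i]*x[i]) <= capacities[k]."""
    n = len(values)

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def objective(sol):
        return sum(values[i] * sol[i] for i in range(n))

    cur = [0] * n                      # start from the empty knapsack
    best, best_val = cur[:], objective(cur)
    tabu = {}                          # variable index -> iteration until which it is tabu

    for it in range(iters):
        candidates = []
        for i in range(n):
            nb = cur[:]
            nb[i] ^= 1                 # single-flip neighborhood
            if not feasible(nb):
                continue
            v = objective(nb)
            # tabu move, unless it beats the best known value (aspiration)
            if tabu.get(i, -1) >= it and v <= best_val:
                continue
            candidates.append((v, i, nb))
        if not candidates:
            break                      # every move is tabu or infeasible
        v, i, cur = max(candidates, key=lambda t: t[0])
        tabu[i] = it + tenure          # forbid flipping i back for a while
        if v > best_val:
            best, best_val = cur[:], v
    return best, best_val

# Hypothetical instance: 4 items, 2 knapsack constraints
print(tabu_knapsack(values=[10, 7, 5, 9],
                    weights=[[3, 2, 1, 4], [2, 4, 1, 3]],
                    capacities=[6, 7]))      # -> ([1, 1, 1, 0], 22)
```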
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
The object-oriented systems life cycle In software engineering, the traditional description of the software life cycle is based on an underlying model, commonly referred to as the “waterfall” model (e.g., [4]). This model initially attempts to discretize the identifiable activities within the software development process as a linear series of actions, each of which must be completed before the next is commenced. Further refinements to this model appreciate that such completion is seldom absolute and that iteration back to a previous stage is likely. Various authors' descriptions of this model relate to the detailed level at which the software building process is viewed. At the most general level, three phases to the life cycle are generally agreed upon: 1) analysis, 2) design and 3) construction/implementation (e.g., [36], p. 262; [42]) (Figure 1(a)). The analysis phase covers from the initiation of the project, through to users-needs analysis and feasibility study (cf. [15]); the design phase covers the various concepts of system design, broad design, logical design, detailed design, program design and physical design. Following from the design stage(s), the computer program is written, the program tested, in terms of verification, validation and sensitivity testing, and when found acceptable, put into use and then maintained well into the future.In the more detailed description of the life cycle a number of subdivisions are identified (Figure 1(b)). The number of these subdivisions varies between authors. In general, the problem is first defined and an analysis of the requirements of current and future users undertaken, usually by direct and indirect questioning and iterative discussion. Included in this stage should be a feasibility study. Following this a user requirements definition and a software requirements specification, (SRS) [15], are written. The users requirements definition is in the language of the users so that this can be agreed upon by both the software engineer and the software user. The software requirements specification is written in the language of the programmer and details the precise requirements of the system. These two stages comprise an answer to the question of WHAT? (viz. problem definition). The user-needs analysis stage and examination of the solution space are still within the overall phase of analysis but are beginning to move toward not only problem decomposition, but also highlighting concepts which are likely to be of use in the subsequent system design; thus beginning to answer the question HOW? On the other hand, Davis [15] notes that this division into “what” and “how” can be subject to individual perception, giving six different what/how interpretations of an example telephone system. At this requirements stage, however, the domain of interest is still very much that of the problem space. Not until we move from (real-world) systems analysis to (software) systems design do we move from the problem space to the solution space (Figure 2). It is important to observe the occurrence and location of this interface. As noted by Booth [6], this provides a useful framework in object-oriented analysis and design.The design stage is perhaps the most loosely defined since it is a phase of progressive decomposition toward more and more detail (e.g., [41]) and is essentially a creative, not a mechanistic, process [42]. Consequently, systems design may also be referred to as “broad design” and program design as “detailed design” [20]. Brookes et al. 
[9] refer to these phases as “logical design” and “physical design.” In the traditional life cycle these two design stages can become both blurred and iterative; but in the object-oriented life cycle the boundary becomes even more indistinct.The software life cycle, as described above, is frequently implemented based on a view of the world interpreted in terms of a functional decomposition; that is, the primary question addressed by the systems analysis and design is WHAT does the system do viz. what is its function? Functional design, and the functional decomposition techniques used to achieve this, is based on the interpretation of the problem space and its translation to solution space as an interdependent set of functions or procedures. The final system is seen as a set of procedures which, apparently secondarily, operate on data.Functional decomposition is also a top-down analysis and design methodology. Although the two are not synonymous, most of the recently published systems analysis and design methods exhibit both characteristics (e.g., [14, 17]) and some also add a real-time component (e.g., [44]). Top-down design does impose some discipline on the systems analyst and program designer; yet it can be criticized as being too restrictive to support contemporary software engineering designs. Meyer [29] summarizes the flaws in top-down system design as follows:1. top-down design takes no account of evolutionary changes;2. in top-down design, the system is characterized by a single function—a questionable concept;3. top-down design is based on a functional mindset, and consequently the data structure aspect is often completely neglected;4. top-down design does not encourage reusability. (See also discussion in [41], p. 352 et seq.)
A methodology for deriving an entity-relationship model based on a data flow diagram This article describes an object-oriented methodology for deriving an entity-relationship (ER) model from requirements specified in a data flow diagram (DFD). The methodology is top down. It begins with an analysis of the objects described in the DFD to produce an object model. Modeling objects instead of individual data items reduces the number of data elements with which the analyst must be concerned initially. Next, information about data synonyms and interdependency is considered to refine the object model. Guidelines for removing redundant, overlapping descriptions of objects are also proposed and the object model is transformed into an ER model by applying a set of abstraction heuristics. The methodology integrates the DFD-based structured analysis methodology and the ER model so that process and data requirements can be analyzed simultaneously. It enables the system developer to understand the relationships between the processes (embedded in the DFD diagram) and data (described in the ER model) at the early stage of system analysis. Moreover, changes of requirements, either in process or in data, can be correlated easily to foster a quality design of the final system. This methodology can facilitate application development based on an existing data base or on an entirely new data base.
Object Interaction in Object-Oriented Deductive Conceptual Models We present the main components of an object-oriented deductive approach to conceptual modelling of information systems. This approach does not model object interaction explicitly. However interaction among objects can be derived by means of a formal procedure that we outline.
Templar: a knowledge-based language for software specifications using temporal logic A software specification language Templar is defined in this article. The development of the language was guided by the following objectives: requirements specifications written in Templar should have a clear syntax and formal semantics, should be easy for a systems analyst to develop and for an end-user to understand, and it should be easy to map them into a broad range of design specifications. Templar is based on temporal logic and on the Activity-Event-Condition-Activity model of a rule which is an extension of the Event-Condition-Activity model in active databases. The language supports a rich set of modeling primitives, including rules, procedures, temporal logic operators, events, activities, hierarchical decomposition of activities, parallelism, and decisions combined together into a cohesive system.
Automating the software development process Demand for reliable software systems is stressing software production capability, and automation is seen as a practical approach to increasing productivity and quality. Discussed in this paper are an approach and an architecture for automating the software development process. The concepts are developed from the viewpoint of the needs of the software development process, rather than that of established tools or technology. We discuss why automation of software development must be accomplished by evolutionary means. We define the architecture of a software engineering support facility to support long-term process experimentation, evolution, and automation. Such a facility would provide flexibility, tool portability, tool and process integration, and process automation for a wide range of methodologies and tools. We present the architectural concepts for such a facility and examine ways in which it can be used to foster software automation.
Software size estimation of object-oriented systems The strengths and weaknesses of existing size estimation techniques are discussed. The nature of software size estimation is considered. The proposed method takes advantage of a characteristic of object-oriented systems, the natural correspondence between specification and implementation, in order to enable users to come up with better size estimates at early stages of the software development cycle. Through a statistical approach the method also provides a confidence interval for the derived size estimates. The relation between the presented software sizing model and project cost estimation is also considered.
Supporting systems development by capturing deliberations during requirements engineering Support for various stakeholders involved in software projects (designers, maintenance personnel, project managers and executives, end users) can be provided by capturing the history about design decisions in the early stages of the system's development life cycle in a structured manner. Much of this knowledge, which is called the process knowledge, involving the deliberation on alternative requirements and design decisions, is lost in the course of designing and changing such systems. Using an empirical study of problem-solving behavior of individual and groups of information systems professionals, a conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects that are created during the requirements engineering process has been developed. A prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
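As a concrete companion to the definitions above, a minimal place/transition-net sketch (my own illustration): a transition is enabled when its input places hold enough tokens, and firing moves tokens from the pre-set to the post-set.

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire an enabled transition: consume pre-set tokens, produce post-set tokens."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    new = dict(marking)
    for p, n in pre.items():
        new[p] = new[p] - n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

# Example: a one-step producer handshake on a hypothetical two-place net
marking = {"ready": 1, "buffer": 0}
marking = fire(marking, pre={"ready": 1}, post={"buffer": 1})
assert marking == {"ready": 0, "buffer": 1}
```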
The architecture and design of a collaborative environment for systems definition Defining systems requirements and specifications is a collaborative effort among managers, users, and systems developers. The difficulty of systems definition is caused by humans' limited cognitive capabilities, which is compounded by the complexity of group communication and coordination processes. Current system analysis methodologies are first evaluated with regard to the level of support they offer users. Since systems definition is a knowledge-intensive activity, the knowledge contents and structures employed in systems definition are discussed. For any large-scale system, no one person possesses all the knowledge that is needed; therefore, the authors proposed a collaborative approach to systems definition. The use of a group decision support system (GDSS) for systems definition is first described and limitations of the current GDSS are identified. The architecture and design of a collaborative computer-aided software engineering (CASE) environment, called C-CASE, is then discussed. C-CASE can be used to assist users in defining the requirements of their organization and information systems as well as to analyze the consistency and completeness of the requirements. C-CASE integrates GDSS and CASE such that users can actively participate in the requirements elicitation process. Users can use the metasystem capability of C-CASE to define domain-specific systems definition languages, which are adaptable to different systems development settings. An example of using C-CASE in a collaborative environment is given. The implications of C-CASE and the authors' ongoing research are also discussed.
A Classification Framework to Support the Design of Visual Languages An important step in the design of visual languages is the specification of the graphical objects and the composition rules for constructing feasible visual sentences. The presence of different typologies of visual languages, each with specific graphical and structural characteristics, yields the need to have models and tools that unify the design steps for different types of visual languages. To this aim, in this paper we present a formal framework of visual language classes. Each class characterizes a family of visual languages based upon the nature of their graphical objects and composition rules. The framework has been embedded in the Visual Language Compiler–Compiler (VLCC), a graphical system for the automatic generation of visual programming environments.
Superposed Automata Nets
Finding Response-Times In A Real-Time System There are two major performance issues in a real-time system where a processor has a set of devices connected to it at different priority levels. The first is to prove whether, for a given assignment of devices to priority levels, the system can handle its peak processing load without losing any inputs from the devices. The second is to determine the response time for each device. There may be several ways of assigning the devices to priority levels so that the peak processing load is met, but only some (or perhaps none) of these ways will also meet the response-time requirements for the devices. In this paper, we define a condition that must be met to handle the peak processing load and describe how exact worst-case response times can then be found. When the condition cannot be met, we show how the addition of buffers for inputs can be useful. Finally, we discuss the use of multiple processors in systems for real-time applications.
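The abstract does not reproduce the paper's exact condition, so the sketch below uses the classical fixed-priority worst-case response-time recurrence R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j, iterated to a fixed point; the task parameters in the example are hypothetical.

```python
import math

def response_time(C, T, i, max_iter=1000):
    """Worst-case response time of task i under fixed-priority scheduling.
    Tasks are indexed in decreasing priority; C[j] and T[j] are the worst-case
    execution time and period of task j. Returns None if the response time
    exceeds T[i] (deadline assumed equal to period) or does not converge."""
    R = C[i]
    for _ in range(max_iter):
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next == R:                 # fixed point reached
            return R if R <= T[i] else None
        R = R_next
    return None

# Hypothetical device-handling tasks: (C, T) in milliseconds
C = [1, 2, 4]
T = [5, 10, 20]
print([response_time(C, T, i) for i in range(3)])   # -> [1, 3, 8]
```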
Fast Piecewise Linear Predictors For Lossless Compression Of Hyperspectral Imagery The work presented here deals with the design of predictors for the lossless compression of hyperspectral imagery. The large number of spectral bands that characterize hyperspectral imagery gives it properties that can be exploited when performing compression. Specifically, in addition to the spatial correlation common to all images, the large number of spectral bands also means a high spectral correlation. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. This work deals with the design of predictors for the decorrelation stage that are both fast and good. Fast implies low complexity, which was achieved by having predictors with no multiplications, only comparisons and additions. Good means predictors whose performance is close to the state of the art. To achieve this, both spectral and spatial correlations are used for the predictor. The performance of the developed predictors is compared to that of the most widely known algorithms, LOCO-I, used in JPEG-Lossless, and CALIC-Extended, the original version of which had the best compression performance of all the algorithms submitted to the JPEG-LS committee. The developed algorithms are shown to be much less complex than CALIC-Extended with better compression performance.
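For context on multiplication-free prediction, here is the well-known median edge detector used by LOCO-I/JPEG-LS; it is purely spatial, whereas the paper's predictors also exploit spectral correlation, which is not reproduced here.

```python
def med_predict(a, b, c):
    """LOCO-I / JPEG-LS median edge detector (MED) predictor.
    a = left neighbor, b = upper neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)        # likely a vertical/horizontal edge
    if c <= min(a, b):
        return max(a, b)
    return a + b - c            # smooth region: planar prediction

# Example: predict a pixel, then encode only the (small) residual
a, b, c, actual = 100, 104, 98, 103
residual = actual - med_predict(a, b, c)   # -> -1
```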
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.011528
0.014792
0.008602
0.008052
0.006586
0.003227
0.001516
0.000237
0.000046
0.000018
0.000001
0
0
0
SentiBubbles: Topic Modeling and Sentiment Visualization of Entity-centric Tweets. Social Media users tend to mention entities when reacting to news events. The main purpose of this work is to create entity-centric aggregations of tweets on a daily basis. By applying topic modeling and sentiment analysis, we create data visualization insights about current events and people reactions to those events from an entity-centric perspective.
Developing Smart Cities Services through Semantic Analysis of Social Streams This paper presents a domain-agnostic framework for intelligent processing of textual streams coming from social networks. The framework implements a pipeline of techniques for semantic representation, sentiment analysis, and automatic content classification, and provides an analytics console to get some findings from the extracted data. The effectiveness of the platform has already been demonstrated by deploying it in two smart-cities-related scenarios: in the first, it was used to monitor the recovery of the social capital of the city of L'Aquila after the dreadful earthquake of April 2009, while in the second a semantic analysis of the content posted on social networks was performed to build a map of the most at-risk areas of the Italian territory. In both scenarios, the outcomes resulting from the analysis confirmed the insight that the adoption of methodologies for intelligent and semantic analysis of textual content can provide interesting findings useful for improving the understanding of very complex phenomena.
Mining social media for open innovation in transportation systems This work proposes a novel framework for the development of new products and services in transportation through an open innovation approach based on automatic content analysis of social media data. The framework is able to extract users comments from Online Social Networks (OSN), to process and analyze text through information extraction and sentiment analysis techniques to obtain relevant information about product reception on the market. A use case was developed using the mobile application Uber, which is today one of the fastest growing technology companies in the world. We measured how a controversial, highly diffused event influences the volume of tweets about Uber and the perception of its users. While there is no change in the image of Uber, a large increase in the number of tweets mentioning the company is observed, which meant a free and important diffusion of its product.
On Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper was set in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: [email protected]), (URL: http://www-formal.stanford.edu/jmc/) by starting with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters of these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
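A minimal sketch of the recommended protocol, ten-fold stratified cross-validation, in plain Python; the fit and accuracy callables are hypothetical stand-ins for a learner such as C4.5 or Naive Bayes and its evaluation.

```python
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Assign each example index to one of k folds, keeping the class
    proportions roughly equal in every fold (stratification)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, idx in enumerate(idxs):
            folds[j % k].append(idx)   # deal each class out round-robin
    return folds

def cross_validate(X, y, fit, accuracy, k=10):
    """k-fold stratified cross-validation: average held-out accuracy."""
    scores = []
    for fold in stratified_folds(y, k):
        test = set(fold)
        train_idx = [i for i in range(len(y)) if i not in test]
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        scores.append(accuracy(model, [X[i] for i in fold], [y[i] for i in fold]))
    return sum(scores) / len(scores)
```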
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Generalized Jensen Inequalities with Application to Stability Analysis of Systems with Distributed Delays over Infinite Time-Horizons. The Jensen inequality has been recognized as a powerful tool to deal with the stability of time-delay systems. Recently, a new inequality that encompasses the Jensen inequality was proposed for the stability analysis of systems with finite delays. In this paper, we first present a generalized integral inequality and its double integral extension. It is shown how these inequalities can be applied to improve the stability result for linear continuous-time systems with gamma-distributed delays. Then, for the discrete-time counterpart we provide an extended Jensen summation inequality with infinite sequences, which leads to less conservative stability conditions for linear discrete-time systems with Poisson-distributed delays. The improvements obtained by the introduced generalized inequalities are demonstrated through examples.
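For reference, the classical (non-generalized) Jensen integral inequality that the paper's results extend, stated for a symmetric positive definite matrix R and a finite delay h > 0:

```latex
% Classical Jensen integral inequality used in delay-dependent stability analysis:
% for any R = R^{\top} \succ 0, h > 0, and suitably integrable \dot{x},
-\int_{t-h}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
\;\le\;
-\frac{1}{h}
\left(\int_{t-h}^{t} \dot{x}(s)\, \mathrm{d}s\right)^{\!\top}
R
\left(\int_{t-h}^{t} \dot{x}(s)\, \mathrm{d}s\right)
```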
1.1
0.05
0.05
0
0
0
0
0
0
0
0
0
0
0
Stability Analysis of Continuous-Time Switched Neural Networks With Time-Varying Delay Based on Admissible Edge-Dependent Average Dwell Time This article investigates the stability of the switched neural networks (SNNs) with a time-varying delay. To effectively guarantee the stability of the considered system with unstable subsystems and reduce conservatism of the stability criteria, admissible edge-dependent average dwell time (AED-ADT) is first utilized to restrict switching signals for the continuous-time SNNs, and multiple Lyapunov...
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Software Process Validation - Comparing Process and Practice Models. To assure the quality of software processes, models play an important role. Process models represent the officially sanctioned software development processes in the organization. Although important, they are not sufficient, since the practices of software developers often differ considerably from the official process. Practice models, describing the way software development is really done, are an important source of information for validating the software process. Using conceptual graph theory, we present a formal method for representing and comparing process and practice models in various combinations. The method allows for differences between these models to be easily detected. Software developers, such as managers or engineers, can then interpret these differences to make recommendations for software process improvement.
Towards a semantic metrics suite for object-oriented design In recent years, much work has been performed in developing suites of metrics that are targeted for object-oriented software, rather than functionally oriented software. This is necessary since good object-oriented software has several characteristics, such as inheritance and polymorphism that are not usually present in functionally oriented software. However, all of these object-oriented metrics suites have been defined using only syntactic aspects of object-oriented software; indeed, the earlier functionally-oriented metrics were also calculated using only syntactic information. All syntactically oriented metrics have the problem that the mapping from the metric to the quality the metric purports to measure, such as the software quality factor "cohesion," is indirect, and often arguable. Thus, a substantial amount of research effort goes into proving that these syntactically oriented metrics actually do measure their associated quality factors. This paper introduces a new suite of semantically derived object-oriented metrics, which provide a more direct mapping from the metric to its associated quality factor than is possible using syntactic metrics. These semantically derived metrics are calculated using knowledge-based, program understanding, and natural language processing techniques.
Proceedings of the 2nd International Conference on Pragmatic Web, ICPW 2007, Tilburg, The Netherlands, October 22-23, 2007
Experiences in Automating the Analysis of Linguistic Interactions for the Study of Distributed Collectives An important issue faced by research on distributed collective practices is the amount and nature of the data available for study. While persistent mediated interaction offers unprecedented opportunities for research, the wealth and richness of available data pose issues on their own, calling for new methods of investigation. In such a context, automated tools can offer coverage, both within and across collectives. In this paper, we investigate the potential contributions of semantic analyses of linguistic interactions for the study of collective processes and practices. In other words, we are interested in discovering how linguistic interaction is related to collective action, as well as in exploring how computational tools can make use of these relationships for the study of distributed collectives.
Making Workflow Change Acceptable Virtual professional communities are supported by network information systems composed from standard Internet tools. To satisfy the interests of all community members, a user-driven approach to requirements engineering is proposed that produces not only meaningful but also acceptable specifications. This approach is especially suited for workflow systems that support partially structured, evolving work processes. To ensure the acceptability, social norms must guide the specification process. The RENISYS specification method is introduced, which facilitates this process using composition norms as formal representations of social norms. Conceptual graph theory is used to represent four categories of knowledge definitions: type definitions, state definitions, action norms and composition norms. It is shown how the composition norms guide the legitimate user-driven specification process by analysing a case on the development of an electronic law journal.
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
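As a point of reference for the time-domain (dater) viewpoint mentioned above, the standard (max,+) form of a timed event graph is sketched below; this is textbook material rather than an excerpt from the paper, and the symbols are generic.

```latex
% Dater form of a timed event graph in the (max,+) dioid:
% \oplus denotes max, \otimes denotes +, and x_i(k) is the date of the
% k-th firing of transition i; u is the input, y the output.
\[
  x(k) \;=\; A \otimes x(k-1) \;\oplus\; B \otimes u(k), \qquad
  y(k) \;=\; C \otimes x(k),
\]
\[
  \bigl(A \otimes x(k-1)\bigr)_i \;=\; \max_j \bigl( A_{ij} + x_j(k-1) \bigr).
\]
```

Here \(\oplus\) and \(\otimes\) play the roles that addition and multiplication play in conventional linear system theory, which is the sense in which the model remains "linear".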
Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
Further Improvement of Free-Weighting Matrices Technique for Systems With Time-Varying Delay A novel method is proposed in this note for stability analysis of systems with a time-varying delay. Appropriate Lyapunov functional and augmented Lyapunov functional are introduced to establish some improved delay-dependent stability criteria. Less conservative results are obtained by considering the additional useful terms (which are ignored in previous methods) when estimating the upper bound of the derivative of Lyapunov functionals and introducing the new free-weighting matrices. The resulting criteria are extended to the stability analysis for uncertain systems with time-varying structured uncertainties and polytopic-type uncertainties. Numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method
Protocol verification as a hardware design aid The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
The navigation toolkit The problem
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24
0.24
0.24
0.24
0.08
0
0
0
0
0
0
0
0
0
Revisiting Snapshot Algorithms by Refinement-Based Techniques The snapshot problem addresses a collection of important algorithmic issues related to distributed computations, which are used for debugging or recovering distributed programs. Among the existing solutions, Chandy and Lamport propose a simple distributed algorithm. In this paper, we explore the correct-by-construction process to formalize snapshot algorithms in distributed systems. The formalization process is based on the modelling language Event-B, which supports refinement-based incremental development using the RODIN platform. These refinement-based techniques help to derive a correct distributed algorithm. Moreover, we demonstrate how other distributed algorithms of this class can be revisited. A consequence is to provide a fully mechanized proof of the distributed algorithms.
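To make the algorithm being formalized concrete, the following is a minimal, single-threaded Python simulation of the Chandy-Lamport marker protocol; the class names, message model and two-process demo are illustrative assumptions, not code derived from the Event-B development.

```python
# Minimal simulation of the Chandy-Lamport snapshot algorithm.
# Names, the channel model and the demo data are illustrative assumptions.
from collections import deque

MARKER = "MARKER"

class Process:
    def __init__(self, pid, state, incoming):
        self.pid = pid
        self.state = state                   # local application state (an int here)
        self.incoming = incoming             # {src_pid: FIFO channel (deque)}
        self.recorded_state = None
        self.channel_state = {}              # {src_pid: messages recorded in transit}
        self.recording = {}                  # {src_pid: still recording that channel?}

    def start_snapshot(self, send):
        """Initiator: record local state, then put a marker on every outgoing channel."""
        self._record_and_broadcast(send)

    def _record_and_broadcast(self, send):
        self.recorded_state = self.state
        for src in self.incoming:
            self.channel_state[src] = []
            self.recording[src] = True
        send(self.pid, MARKER)               # marker on all channels leaving this process

    def receive(self, src, msg, send):
        if msg == MARKER:
            if self.recorded_state is None:  # first marker seen: record and propagate
                self._record_and_broadcast(send)
            self.recording[src] = False      # channel from src is now fully recorded
        else:
            self.state += msg                # toy application behaviour: accumulate transfers
            if self.recorded_state is not None and self.recording.get(src):
                self.channel_state[src].append(msg)

def run_demo():
    # Two processes exchanging integer "transfer" messages over FIFO channels.
    chans = {("p", "q"): deque([5]), ("q", "p"): deque([3])}
    procs = {
        "p": Process("p", state=100, incoming={"q": chans[("q", "p")]}),
        "q": Process("q", state=50,  incoming={"p": chans[("p", "q")]}),
    }

    def send(sender, msg):                   # append msg to every channel leaving `sender`
        for (s, _d), ch in chans.items():
            if s == sender:
                ch.append(msg)

    procs["p"].start_snapshot(send)
    progress = True                          # deliver all in-flight messages, FIFO per channel
    while progress:
        progress = False
        for (s, d), ch in chans.items():
            if ch:
                procs[d].receive(s, ch.popleft(), send)
                progress = True

    for pr in procs.values():
        print(pr.pid, "recorded state:", pr.recorded_state,
              "recorded channels:", pr.channel_state)

if __name__ == "__main__":
    run_demo()
```

In this demo the recorded snapshot (state 100 at p, 55 at q, plus the value 3 recorded in transit on the channel from q to p) preserves the global total of 158, which is the consistency property the snapshot is meant to capture.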
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
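By way of background, a minimal generic tabu search for a single-constraint 0/1 knapsack is sketched below in Python; the instance data, tenure and flip neighbourhood are made-up illustrations and do not reproduce the specialized choice rules, aspiration criteria or target analysis described above.

```python
# Minimal tabu search for a small 0/1 knapsack instance (illustrative only;
# the instance data, tenure and flip neighbourhood are assumptions).
values   = [60, 100, 120, 75, 40, 90]
weights  = [10,  20,  30, 15, 10, 25]
capacity = 60

def evaluate(x):
    """Total value, with infeasible selections heavily penalized."""
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    v = sum(vi for vi, xi in zip(values, x) if xi)
    return v - 1000 * max(0, w - capacity)

def tabu_search(iterations=200, tenure=5):
    n = len(values)
    current = [0] * n
    best, best_val = current[:], evaluate(current)
    tabu = {}                                   # item index -> last iteration it stays tabu

    for it in range(iterations):
        best_move, best_move_val = None, float("-inf")
        for i in range(n):                      # neighbourhood: flip one variable
            cand = current[:]
            cand[i] ^= 1
            val = evaluate(cand)
            # Aspiration criterion: a tabu move is allowed if it beats the global best.
            if tabu.get(i, -1) >= it and val <= best_val:
                continue
            if val > best_move_val:
                best_move, best_move_val = i, val
        if best_move is None:
            break                               # every move is tabu and none aspires
        current[best_move] ^= 1
        tabu[best_move] = it + tenure           # forbid flipping this item back for a while
        if best_move_val > best_val:
            best, best_val = current[:], best_move_val
    return best, best_val

if __name__ == "__main__":
    selection, value = tabu_search()
    print("selection:", selection, "value:", value)
```

With this made-up data the search settles on items 0, 1 and 2 (weight 60, value 280); the tabu list is what keeps it from cycling back immediately after a non-improving move.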
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An assertional correctness proof of a distributed algorithm Using ordinary assertional methods for concurrent program verification, we prove the correctness of a distributed algorithm for maintaining message-routing tables in a network with communication lines that can fail. This shows that assertional reasoning about global states works well for distributed as well as nondistributed algorithms.
Stepwise Refinement and Concurrency: A Small Exercise A simple methodology for the design of concurrent programs is illustrated by a short example. This methodology formalizes the classical concept of stepwise refinement.
Stepwise refinement and concurrency: the finite-state case A simple methodology for the design and the verification of finite-state concurrent programs is proposed and illustrated by a short example. In most cases, this methodology is likely to be more systematic than the technique of stepwise refinement, and more efficient than the fixpoint-based method.
A Correctness Proof of a Distributed Minimum-Weight Spanning Tree Algorithm (extended abstract)
An Action System Specification of the Caltech Asynchronous Microprocessor The action system framework for modelling parallel programs is used to formally specify a microprocessor. First the microprocessor is specified as a sequential program. The sequential specification is then decomposed and refined into a concurrent program using correctness-preserving program transformations. Previously this microprocessor has been specified in a semi-formal manner at Caltech, where an asynchronous circuit for the microprocessor was derived from the specification. We propose a...
A framework for Incorporating trust into formal systems development Formal methods constitute a means of developing reliable and correctly behaving software based on a specification. In scenarios where information technology is used as a foundation to enable human communication, this is, however, not always enough. Successful interaction between humans often depends on the concept of trust, which is different from program correctness. In this paper, we present a framework for integrating trust into a formal development process, allowing for the construction of formally correct programs for communication, embracing trust as a central concept. We present a coordination language for use with action systems, taking a modular approach of separating trust aspects from other functionality. We also believe that our work can be adapted to modelling other aspects besides trust. Throughout the paper, we employ a case study as a testbed for our concepts.
Creating sequential programs from event-B models Event-B is an emerging formal method with good tool support for various kinds of system modelling. However, the control flow in Event-B consists only of non-deterministic choice of enabled events. In many applications, notably in sequential program construction, more elaborate control flow mechanisms would be convenient. This paper explores a method, based on a scheduling language, for describing the flow of control. The aim is to be able to express schedules of events; to reason about their correctness; to create and verify patterns for introducing correct control flow. The conclusion is that using patterns, it is feasible to derive efficient sequential programs from event-based specifications in many cases.
A Methodology for Developing Distributed Programs A methodology, different from the existing ones, for constructing distributed programs is presented. It is based on the well-known idea of developing distributed programs via synchronous and centralized programs. The distinguishing features of the methodology are: 1) specification include process structure information and distributed programs are developed taking this information into account, 2) a new class of programs, called PPSA's, is used in the development process, and 3) a transformational approach is suggested to solve the problems inherent in the method of developing distributed programs through synchronous and centralized programs. The methodology is illustrated with an example.
Proving entailment between conceptual state specifications The lack of expressive power of temporal logic as a specification language can be compensated to a certain extent by the introduction of powerful, high-level temporal operators, which are difficult to understand and reason about. A more natural way to increase the expressive power of a temporal specification language is by introducing conceptual state variables, which are auxiliary (unimplemented) variables whose values serve as an abstract representation of the internal state of the process being specified. The kind of specifications resulting from the latter approach are called conceptual state specifications. This paper considers a central problem in reasoning about conceptual state specifications: the problem of proving entailment between specifications. A technique, based on the notion of simulation between machines, is shown to be sound for proving entailment. A kind of completeness result can also be shown if specifications are assumed to satisfy well-formedness conditions. The role played by entailment in proofs of correctness is illustrated by the problem of proving that the concatenation of two FIFO buffers implements a FIFO buffer.
A compositional axiomatization of Statecharts Statecharts is a behavioural specification language proposed for specifying large real-time, event-driven reactive systems. It is a graphical language based on state-transition diagrams for finite state machines extended with many features like hierarchy, concurrency, broadcast communication and time-out. We supply Statecharts with a compositional axiomatization for both safety and liveness properties. By generating external events symbolically, Statecharts can be executed, thereby turning it into a programming language for real-time concurrency (as well as enabling rapid prototyping). As such it is well suited for compositional program verification. In addition to our compositional axiomatic system, we give a denotational semantics and prove that the axiomatization is sound and relatively complete with respect to this semantics.
Safeware: system safety and computers
Run-length encodings (Corresp.) First Page of the Article
Heuristic search in PARLOG using replicated worker style parallelism Most concurrent logic programming languages hide the distribution of processes among physical processors from the programmer. For parallel applications based on heuristic search, however, it is important for the programmer to accurately control this distribution. With such applications, an inferior distribution strategy easily leads to enormous search overheads, thus decreasing speedup on parallel hardware.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.017798
0.016549
0.01303
0.00887
0.005905
0.003429
0.001771
0.000315
0.000027
0.000001
0
0
0
0
Assessing identification of compliance requirements from privacy policies In the United States, organizations can be held liable by the Federal Trade Commission for the statements they make in their privacy policies. Thus, organizations must include their privacy policies as a source of requirements in order to build systems that are policy-compliant. In this paper, we describe an empirical user study in which we measure the ability of requirements engineers to effectively extract compliance requirements from a privacy policy using one of three analysis approaches: CPR (commitment, privilege, and right) analysis, goal-based analysis, and non-method-assisted (control) analysis. The results of these three approaches were then compared to an expert-produced set of expected compliance requirements. The requirements extracted by the CPR subjects reflected a higher percentage of requirements that were expected compliance requirements as well as a higher percentage of the total expected compliance requirements. In contrast, the goal-based and control subjects produced a higher number of synthesized requirements, or requirements not directly derived from the policy, than the CPR subjects. This larger number of synthesized requirements may be attributed to the fact that these two subject groups employed more inquiry-driven approaches than the CPR subjects, who relied primarily on focused and direct extraction of compliance requirements.
Formal analysis of privacy requirements specifications for multi-tier applications Companies require data from multiple sources to develop new information systems, such as social networking, e-commerce and location-based services. Systems rely on complex, multi-stakeholder data supply-chains to deliver value. These data supply-chains have complex privacy requirements: privacy policies affecting multiple stakeholders (e.g. user, developer, company, government) regulate the collection, use and sharing of data over multiple jurisdictions (e.g. California, United States, Europe). Increasingly, regulators expect companies to ensure consistency between company privacy policies and company data practices. To address this problem, we propose a methodology to map policy requirements in natural language to a formal representation in Description Logic. Using the formal representation, we reason about conflicting requirements within a single policy and among multiple policies in a data supply chain. Further, we enable tracing data flows within the supply-chain. We derive our methodology from an exploratory case study of Facebook platform policy. We demonstrate the feasibility of our approach in an evaluation involving Facebook, Zynga and AOL-Advertising policies. Our results identify three conflicts that exist between Facebook and Zynga policies, and one conflict within the AOL Advertising policy.
Automated text mining for requirements analysis of policy documents Businesses and organizations in jurisdictions around the world are required by law to provide their customers and users with information about their business practices in the form of policy documents. Requirements engineers analyze these documents as sources of requirements, but this analysis is a time-consuming and mostly manual process. Moreover, policy documents contain legalese and present readability challenges to requirements engineers seeking to analyze them. In this paper, we perform a large-scale analysis of 2,061 policy documents, including policy documents from the Google Top 1000 most visited websites and the Fortune 500 companies, for three purposes: (1) to assess the readability of these policy documents for requirements engineers; (2) to determine if automated text mining can indicate whether a policy document contains requirements expressed as either privacy protections or vulnerabilities; and (3) to establish the generalizability of prior work in the identification of privacy protections and vulnerabilities from privacy policies to other policy documents. Our results suggest that this requirements analysis technique, developed on a small set of policy documents in two domains, may generalize to other domains.
Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called Semantic Parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the Privacy Rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements.
Formal Derivation of Strongly Correct Concurrent Programs. A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity.
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
ACE: building interactive graphical applications
Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it.
Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity-Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation.
The navigation toolkit The problem
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.1
0.1
0.005882
0
0
0
0
0
0
0
0
0
0
A reuse base for real-time software specifications Reuse promises to be one of the key factors in enhancing quality and productivity in software development. However, the existing methods and CASE tools for real-time systems are usually focused on the development of software as a single, “disposable” product only. In this paper we describe a domain-based reuse system for the reuse and evolution of structured specifications and designs of embedded systems.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrence by employing an adaptive strategy. The design goals are to maximize the hiding elements while minimizing the shifting and modification elements, thereby maintaining high image quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image.
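For context, the classical single-peak histogram-shifting embedding that AGM generalizes can be sketched as follows in Python; the left-neighbour predictor, the toy row and the omission of overflow handling are simplifying assumptions, and this is not the AGM scheme itself.

```python
# Classical single-peak histogram-shifting (HS) embedding on prediction errors,
# using a left-neighbour predictor on one image row. Overflow/underflow handling
# (location map) is omitted; names and the toy data are assumptions.
from collections import Counter

def embed(row, bits):
    """Hide `bits` in one row of pixel values; returns (marked_row, peak)."""
    errors = [row[i] - row[i - 1] for i in range(1, len(row))]
    peak = Counter(errors).most_common(1)[0][0]   # most frequent prediction error
    payload = iter(bits)
    marked = [row[0]]                             # first pixel is left untouched (seed)
    for i in range(1, len(row)):
        e = row[i] - row[i - 1]                   # predict from the *original* neighbour
        if e > peak:
            e += 1                                # shift to vacate bin peak+1
        elif e == peak:
            e += next(payload, 0)                 # embed one payload bit (0 keeps the bin)
        marked.append(row[i - 1] + e)
    return marked, peak

def extract(marked, peak):
    """Recover the payload bits and the original row from a marked row."""
    bits, restored = [], [marked[0]]
    for i in range(1, len(marked)):
        e = marked[i] - restored[i - 1]           # predict from the *restored* neighbour
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1)
            e = peak
        elif e > peak + 1:
            e -= 1                                # undo the shift
        restored.append(restored[i - 1] + e)
    return bits, restored

if __name__ == "__main__":
    row = [100, 101, 101, 103, 103, 104, 104, 104, 106]
    marked, peak = embed(row, [1, 0, 1, 1])
    bits, restored = extract(marked, peak)
    print("marked:", marked, "peak:", peak)
    print("payload:", bits, "lossless:", restored == row)
```

Extraction walks left to right, so each restored pixel supplies exactly the predictor value the embedder used; that is what makes the scheme reversible, and it is the baseline against which adaptive multi-bin methods such as AGM are compared.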
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On structuring formal, semi-formal and informal data to support traceability in systems engineering environments The development of large, complex systems poses a number of challenges for systems engineers, not least of which is the ability to ensure user requirements have been satisfied. Effective requirements management - an amalgam of information capture, information storage and management, and information dissemination activities - is crucial in that respect. In this paper we concentrate on one of the core issues of information management in a requirements management context - namely traceability. Traceability is the common term for mechanisms to record and navigate relationships between artifacts produced by development processes. However, realising effective traceability in systems engineering environments is complicated by the fact that engineers use a range of notations to describe complex systems. These range from natural language (informal), to graphical notations such as Statecharts (semi-formal) to languages with a well defined (formal) semantics such as VDM-SL and SPARK Ada. Most have tool support, although a lack of well-defined approaches to integration leads to inconsistencies and limits traceability between their respective data sets (internal models). This paper demonstrates an approach based on meta-modelling that enables traceability links to be established and consistency maintained between tools.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
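As context for the abstract above, one common simulation-style formulation of data refinement via a predicate transformer can be written as follows. This is a sketch of the general idea, not necessarily the paper's exact definition; here $\alpha$ is assumed to map abstract predicates to concrete ones and $\mathrm{wp}$ is Dijkstra's weakest-precondition transformer.

```latex
% One common simulation-style condition for data refinement through a
% predicate transformer \alpha (assumed notation; details may differ
% from the paper's own formulation).
\[
  A \sqsubseteq_{\alpha} C
  \;\iff\;
  \forall \varphi.\;\;
  \alpha\bigl(\mathrm{wp}(A,\varphi)\bigr)
  \;\Rightarrow\;
  \mathrm{wp}\bigl(C,\alpha(\varphi)\bigr)
\]
% Read: whenever a concrete state represents an abstract state from which
% the abstract program A establishes \varphi, the concrete program C
% establishes the represented postcondition \alpha(\varphi).
```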
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
ConceptBase—a deductive object base for meta data management Deductive object bases attempt to combine the advantages of deductive relational databases with those of object-oriented databases. We review modeling and implementation issues encountered during the development of ConceptBase, a prototype deductive object manager supporting the Telos object model. Significant features include: 1) The symmetric treatment of object-oriented, logic-oriented and graph-oriented perspectives, 2) an infinite metaclass hierarchy as a prerequisite for extensibility and schema evolution, 3) a simple yet powerful formal semantics used as the basis for implementation, 4) a client-server architecture supporting collaborative work in a wide-area setting. Several application experiences demonstrate the value of the approach especially in the field of meta data management.
Electronic Brokering for Assisted Contracting of Software Applets In the new era of electronic commerce, with its efficiency and globalization of communication, the ability to quickly create mutually beneficial contracts will become a critical success factor for organizations. Unfortunately, volatile complex markets demand dynamic deal making not effectively addressed by current technologies. Intermediaries will provide a solution for organizations by efficiently applying specialized knowledge to product value chains, above and beyond the costs they impose. More specifically, electronic brokers will provide automated assistance for electronic contracting through knowledge of (1) the market, (2) requirements analysis, and (3) negotiation. Herein, we motivate the need for an intermediary contract broker, define requirements of an automated broker, and illustrate our prototype, called APPLET DEALMAKER, with an example from the domain of electronic software applet contracting. The prototype demonstrates how the automated application of contracting knowledge - especially contract restructuring - can assist users in deriving mutually beneficial deals.
CASE productivity perceptions of software engineering professionals Computer-aided software engineering (CASE) is moving into the problem-solving domain of the systems analyst. The authors undertook a study to investigate the various functional and behavioral aspects of CASE and determine the impact it has over manual methods of software engineering productivity.
Understanding the role of negotiation in distributed search among heterogeneous agents In our research, we explore the role of negotiation for conflict resolution in distributed search among heterogeneous and reusable agents. We present negotiated search, an algorithm that explicitly recognizes and exploits conflict to direct search activity across a set of agents. In negotiated search, loosely coupled agents interleave the tasks of 1) local search for a solution to some subproblem; 2) integration of local subproblem solutions into a shared solution; 3) information exchange to define and refine the shared search space of the agents; and 4) assessment and reassessment of emerging solutions. Negotiated search is applicable to diverse application areas and problem-solving environments. It requires only basic search operators and allows maximum flexibility in the distribution of those operators. These qualities make the algorithm particularly appropriate for the integration of heterogeneous agents into application systems. The algorithm is implemented in a multi-agent framework, TEAM, that provides the infrastructure required for communication and cooperation.
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
An organisation ontology for enterprise modeling: Preliminary concepts for linking structure and behaviour The paper presents our preliminary exploration into an organisation ontology for the TOVE enterprise model. The ontology puts forward a number of conceptualizations for modeling organisations: activities, agents, roles, positions, goals, communication, authority, commitment. Its primary focus has been in linking structure and behaviour through the concept of empowerment. Empowerment is the right of an organisation agent to perform status changing actions. This linkage is critical to the unification of enterprise models and their executability.
An exploratory contingency model of user participation and MIS use A model is proposed of the relationship between user participation and degree of MIS usage. The model has four dimensions: participation characteristics, system characteristics, system initiator, and the system development environment. Stages of the System Development Life Cycle are considered as a participation characteristic, task complexity as a system characteristic, and top management support and user attitudes as parts of the system development environment. The data are from a cross-sectional survey in Korea, covering 134 users of 77 different information systems in 32 business firms. The results of the analysis support the proposed model in general. Several implications of this for MIS managers are then discussed.
Integrating conflicting requirements in process modeling: a survey and research directions Requirements in process modeling have traditionally been collected separately for different business functions and then integrated into an overall specification. The recent orientation to a process perspective in managing business activities has emphasized early integration, by concurrently analyzing business processes and requirements. Accordingly, requirements analysis methodologies should take into account these new demands. In the paper, we discuss these new integration needs. Traditional methods for requirements integration from database design are analyzed and unfulfilled integration needs are highlighted. Then, other research fields are surveyed that deal with problems similar to integration and offer interesting results: recent developments in database design, software engineering and requirements reuse. Finally, we compare the different contributions and indicate open research directions.
Inside a software design team: knowledge acquisition, sharing, and integration
STATEMATE: a working environment for the development of complex reactive systems This paper provides a brief overview of the STATEMATE system, constructed over the past three years by i-Logix Inc. and Ad Cad Ltd. STATEMATE is a graphical working environment, intended for the specification, analysis, design and documentation of large and complex reactive systems, such as real-time embedded systems, control and communication systems, and interactive software. It enables a user to prepare, analyze and debug diagrammatic, yet precise, descriptions of the system under development from three inter-related points of view, capturing structure, functionality and behavior. These views are represented by three graphical languages, the most intricate of which is the language of statecharts used to depict reactive behavior over time. In addition to the use of statecharts, the main novelty of STATEMATE is in the fact that it 'understands' the entire descriptions perfectly, to the point of being able to analyze them for crucial dynamic properties, to carry out rigorous animated executions and simulations of the described system, and to create running code automatically. These features are invaluable when it comes to the quality and reliability of the final outcome.
The Mentor Project: Steps Toward Enterprise-Wide Workflow Management Enterprise-wide workflow management, where workflows may span multiple organizational units, requires particular consideration of scalability, heterogeneity, and availability issues. The Mentor project which is introduced in this paper aims to reconcile a rigorous workflow specification method with a distributed middleware architecture as a step towards enterprise-wide solutions. The project uses the formalism of state and activity charts and a commercial tool, Statemate, for workflow specification. A first prototype of Mentor has been built which allows executing specifications in a distributed manner. A major contribution of this paper is the method for transforming a centralized state chart specification into a form that is amenable to distributed execution and incorporates the necessary synchronization between different processing entities. Fault tolerance issues are addressed by coupling Mentor with the Tuxedo TP monitor.
Improvements to Platt's SMO Algorithm for SVM Classifier Design This article points out an important source of inefficiency in Platt's sequential minimal optimization (SMO) algorithm that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO. These modified algorithms perform significantly faster than the original SMO on all benchmark data sets tried.
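For context on the two-threshold idea the abstract above refers to, here is a small illustrative sketch (not the article's code; the index-set construction and the tolerance handling are simplified assumptions). It tracks the thresholds commonly called b_up and b_low from the KKT conditions of the dual and declares optimality once they cross within a tolerance.

```python
import numpy as np

def dual_thresholds(alpha, y, K, C, tol=1e-3):
    """Compute two threshold parameters (b_up, b_low) used to check optimality
    in modified SMO algorithms; F_i = sum_j alpha_j y_j K(i, j) - y_i."""
    F = K @ (alpha * y) - y                      # F_i for every example i

    # index sets derived from the KKT conditions of the dual problem (sketch)
    I_up  = ((alpha < C - tol) & (y > 0)) | ((alpha > tol) & (y < 0))
    I_low = ((alpha < C - tol) & (y < 0)) | ((alpha > tol) & (y > 0))

    b_up  = F[I_up].min()                        # most constraining "upper" value
    b_low = F[I_low].max()                       # most constraining "lower" value
    return b_up, b_low

def is_optimal(alpha, y, K, C, tol=1e-3):
    # optimality holds (up to tol) when the two thresholds have crossed
    b_up, b_low = dual_thresholds(alpha, y, K, C, tol)
    return b_low <= b_up + 2 * tol

# tiny illustrative call with a linear kernel on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
y = np.array([1., 1., 1., -1., -1., -1.])
K = X @ X.T
alpha = np.full(6, 0.1)
print(is_optimal(alpha, y, K, C=1.0))
```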
Argos: an automaton-based synchronous language Argos belongs to the family of synchronous languages, designed for programming reactive systems: Lustre (Proceedings of the 14th Symposium on Principles of Programming Languages, Munich, 1987; Proc. IEEE 79(9) (1991) 1305), Esterel (Sci. Comput. Programming 19(2) (1992) 87), Signal (Technical Report, IRISA Report 246, IRISA, Rennes, France, 1985). Argos is a set of operators that allow to combine Boolean Mealy machines, in a compositional way. It takes its origin in Statecharts (Sci. Comput. Programming 8 (1987) 231), but with the Argos operators, one can build only a subset of Statecharts, roughly those that do not make use of multi-level arrows. We explain the main motivations for the definition of Argos, and the main differences with Statecharts and their numerous semantics. We define the set of operators, give them a perfectly synchronous semantics in the sense of Esterel, and prove that it is compositional, with respect to the trace equivalence of Boolean Mealy machines. We give an overview of the work related to the definition and implementation of Argos (code generation, connection to verification tools, introduction of non-determinism, etc.). This paper also gives a set of guidelines for building an automaton-based, Statechart-like, yet perfectly synchronous, language.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.018498
0.017662
0.01534
0.014227
0.01347
0.01347
0.010424
0.006735
0.002966
0.000296
0.000003
0
0
0
Eve: a measurement-centric emulation environment for adaptive internet servers Emulation plays a central role in the performance evaluation, capacity planning, and workload characterization of servers and data centers. Emulation tools usually require developers to focus on mimicking application behavior as well as to deal with system-level details of managing the emulation. With the continuing increase in computing capacity and complexity, capturing the interactions between different parts of an emulation (e.g., clients' reactions to server reconfiguration) increases the complexity and overhead of emulation design. Furthermore, since the amount of measurement data can easily be huge, efficient data management is becoming a key requirement to the proper scalability of any emulation tool. In this paper, we propose Eve, an efficient emulation environment that provides rapid development of distributed and adaptive emulators. By incorporating in-path data processing and custom triggers into a distributed shared variable (DSV) core, Eve provides full and customizable control of how and when measurement data is moved from the source to the DSV, where the data is stored. Both functions simplify data management and minimize the overhead of frequent updates, thus enhancing the created emulator's scalability. They also simplify feedback monitoring and control when creating adaptive emulators. The capabilities of Eve are shown to allow emulation designers to focus on application behavior rather than on system-level details.
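The abstract above highlights two mechanisms: in-path processing of measurement data before it reaches the distributed shared variable (DSV) store, and custom triggers fired on updates. The toy sketch below illustrates that combination; the class, method, and variable names are invented for illustration and are not Eve's actual API.

```python
from collections import defaultdict

class SharedVariableStore:
    """Toy stand-in for a distributed shared variable (DSV) store with
    in-path processing and custom triggers (illustrative names only)."""
    def __init__(self):
        self._data = defaultdict(list)
        self._processors = {}    # name -> function applied before storing
        self._triggers = {}      # name -> (predicate, callback)

    def register_processor(self, name, fn):
        # in-path processing: e.g. aggregate raw samples before they are stored
        self._processors[name] = fn

    def register_trigger(self, name, predicate, callback):
        # custom trigger: run callback when a stored value satisfies predicate
        self._triggers[name] = (predicate, callback)

    def update(self, name, raw_samples):
        value = self._processors.get(name, lambda s: s)(raw_samples)
        self._data[name].append(value)
        if name in self._triggers:
            predicate, callback = self._triggers[name]
            if predicate(value):
                callback(name, value)

# usage sketch: clients push per-request latencies; only the mean is stored,
# and an adaptation callback fires when the mean crosses a threshold
store = SharedVariableStore()
store.register_processor("latency_ms", lambda samples: sum(samples) / len(samples))
store.register_trigger("latency_ms",
                       predicate=lambda mean: mean > 200,
                       callback=lambda n, v: print(f"reconfigure server: {n}={v:.1f}"))

store.update("latency_ms", [120, 150, 130])   # below threshold, no trigger
store.update("latency_ms", [250, 300, 220])   # trigger fires
```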
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Cloud-Based Architectures for Auto-Scalable Web Geoportals towards the Cloudification of the GeoVITe Swiss Academic Geoportal. Cloud computing has redefined the way in which Spatial Data Infrastructures (SDI) and Web geoportals are designed, managed, and maintained. The cloudification of a geoportal represents the migration of a full-stack geoportal application to an internet-based private or public cloud. This work introduces two generic and open cloud-based architectures for auto-scalable Web geoportals, illustrated with the use case of the cloudification efforts of the Swiss academic geoportal GeoVITe. The presented cloud-based architectural designs for auto-scalable Web geoportals consider the most important functional and non-functional requirements and are adapted to both public and private clouds. The availability of such generic cloud-based architectures advances the cloudification of academic SDIs and geoportals.
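The abstract above argues for auto-scalable Web geoportal architectures. As a generic illustration only (not the paper's design; the target utilisation, replica bounds, and load values are invented), a minimal horizontal scale-out/scale-in decision rule might look like this:

```python
def desired_replicas(current, cpu_utilisation, target=0.6, min_r=2, max_r=10):
    """Toy horizontal auto-scaling rule: size the replica count so that the
    observed CPU utilisation would land near the target (illustrative only)."""
    if cpu_utilisation <= 0:
        return current
    wanted = round(current * cpu_utilisation / target)
    return max(min_r, min(max_r, wanted))

# usage: a burst of map-tile requests pushes utilisation up, the pool grows,
# then shrinks again when the load drops
replicas = 2
for cpu in [0.55, 0.85, 0.95, 0.70, 0.40, 0.30]:
    replicas = desired_replicas(replicas, cpu)
    print(f"cpu={cpu:.2f} -> replicas={replicas}")
```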
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A high level language for specifying graph based languages and their programming environments No abstract available.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Exploratory prototyping through the use of frames and production rules Exploratory prototypes allow examination and validation of the functionality of a software system under construction by observing the behavior of the system requirements brought out through an interpreter. In order to produce the exploratory prototype rapidly, a language must be available to provide freedom from implementation concerns and allow for a natural representation of the problem domain through inheritance hierarchies and exception handling mechanisms. For embedded systems the prototyping language must also allow for specification of the system as a set of concurrently executing and interacting processes. The language FRORL2, which uses frames and production rules to construct exploratory prototypes for embedded systems, is discussed
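To make the frame-plus-production-rule idea in the abstract above concrete, here is a toy sketch of a frame hierarchy with slot inheritance and a forward-chaining rule loop. It is not FRORL2 syntax or semantics, and every name in it is illustrative.

```python
class Frame:
    """Toy frame with slot values and single inheritance of slots."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, dict(slots)

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)      # inherit from the parent frame
        return None

def run_rules(frames, rules, max_cycles=10):
    """Forward-chain production rules of the form (condition, action) until
    no rule changes any frame (toy interpreter, not FRORL2 semantics)."""
    for _ in range(max_cycles):
        changed = False
        for condition, action in rules:
            for frame in frames:
                if condition(frame):
                    changed |= bool(action(frame))
        if not changed:
            break

# frames: a generic sensor and a temperature sensor inheriting its defaults
sensor = Frame("sensor", status="ok", alarm=False)
temp = Frame("temp_sensor", parent=sensor, reading=107, limit=100)

def over_limit(f):
    r, lim = f.get("reading"), f.get("limit")
    return r is not None and lim is not None and r > lim and not f.get("alarm")

def raise_alarm(f):
    f.slots["alarm"] = True
    print(f"{f.name}: alarm raised at reading {f.get('reading')}")
    return True

run_rules([sensor, temp], [(over_limit, raise_alarm)])
```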
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
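As a rough illustration of the flip-move neighbourhood, tabu tenure and aspiration criterion that the abstract refers to, the following is a hedged Python sketch for a toy multiconstraint knapsack instance. It is not the paper's TS procedure; the instance data, tenure value and iteration limit are made up.

```python
# Hedged tabu search sketch for a 0/1 multiconstraint knapsack:
# flip one item per move, forbid re-flipping it for `tenure` iterations,
# but allow a tabu move when it improves on the best solution found so far.
def tabu_knapsack(values, weights, capacities, iters=2000, tenure=7):
    n = len(values)

    def feasible(x):
        return all(sum(w[i] * x[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(x):
        return sum(values[i] * x[i] for i in range(n))

    x = [0] * n                       # start from the empty, always-feasible solution
    best_x, best_val = x[:], 0
    tabu_until = [0] * n              # iteration until which flipping item i is tabu

    for it in range(1, iters + 1):
        best_move, best_move_val = None, None
        for i in range(n):            # evaluate every single-bit flip
            x[i] ^= 1
            if feasible(x):
                v = value(x)
                admissible = it >= tabu_until[i] or v > best_val   # aspiration
                if admissible and (best_move_val is None or v > best_move_val):
                    best_move, best_move_val = i, v
            x[i] ^= 1                 # undo the trial flip
        if best_move is None:         # every move tabu or infeasible
            break
        x[best_move] ^= 1             # commit the best admissible move
        tabu_until[best_move] = it + tenure
        if best_move_val > best_val:
            best_x, best_val = x[:], best_move_val
    return best_x, best_val

# toy instance with two knapsack constraints (illustrative data only)
values = [10, 13, 7, 8, 4, 9]
weights = [[3, 4, 2, 3, 1, 3],
           [2, 3, 3, 1, 2, 2]]
capacities = [9, 7]
print(tabu_knapsack(values, weights, capacities))
```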
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Stability analysis of continuous-time systems with time-varying delay using new Lyapunov-Krasovskii functionals. This paper studies the stability of linear continuous-time systems with time-varying delay by employing new Lyapunov–Krasovskii functionals. Based on the new Lyapunov–Krasovskii functionals, more relaxed stability criteria are obtained. Firstly, in order to coordinate with the use of the third-order Bessel-Legendre inequality, a proper quadratic functional is constructed. Secondly, two couples of integral terms $\{\int_{t-h_t}^{s} x(s)\,ds,\ \int_{s}^{t} x(s)\,ds\}$ and $\{\int_{t-h_M}^{s} x(s)\,ds,\ \int_{s}^{t-h_t} x(s)\,ds\}$ are involved in the integral functionals $\int_{t-h_t}^{t}(\cdot)\,ds$ and $\int_{t-h_M}^{t-h_t}(\cdot)\,ds$, respectively, so that the coupling information between them can be fully utilized. Finally, two commonly-used numerical examples are given to demonstrate the effectiveness of the proposed method.
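For orientation only (this is a generic textbook construction, not the augmented functionals proposed in the paper), a basic Lyapunov-Krasovskii functional for a delayed system $\dot x(t) = A x(t) + A_d x(t-h(t))$ with $0 \le h(t) \le h_M$ reads:

$$V(x_t) = x^\top(t) P x(t) + \int_{t-h_M}^{t} x^\top(s) Q x(s)\,ds + h_M \int_{-h_M}^{0}\!\int_{t+\theta}^{t} \dot x^\top(s) R\, \dot x(s)\,ds\,d\theta,$$

with $P, Q, R \succ 0$. Stability is concluded when $\dot V(x_t) < 0$ along system trajectories; integral inequalities such as those cited in these abstracts are used to bound the integral terms that appear in $\dot V$.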
Two novel general summation inequalities to discrete-time systems with time-varying delay. This paper presents two novel general summation inequalities, respectively, in the upper and lower discrete regions. Thanks to the orthogonal polynomials defined in different inner spaces, various concrete single/multiple summation inequalities are obtained from the two general summation inequalities, which include almost all of the existing summation inequalities, e.g., the Jensen, the Wirtinger-based and the auxiliary function-based summation inequalities. Based on the new summation inequalities, a less conservative stability condition is derived for discrete-time systems with time-varying delay. Numerical examples are given to show the effectiveness of the proposed approach.
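As a concrete reference point (stated here for context, not taken from the paper), the discrete Jensen inequality that these summation inequalities refine says that for any matrix $R \succ 0$ and vectors $x_a, \dots, x_b$:

$$(b-a+1)\sum_{i=a}^{b} x_i^\top R\, x_i \;\ge\; \Big(\sum_{i=a}^{b} x_i\Big)^{\!\top} R\, \Big(\sum_{i=a}^{b} x_i\Big).$$

The Wirtinger-based and auxiliary-function-based summation inequalities add non-negative correction terms built from weighted partial sums, which is what makes the resulting stability conditions less conservative.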
Novel Summation Inequalities and Their Applications to Stability Analysis for Systems With Time-Varying Delay. The inequality technique plays an important role in stability analysis for time-delay systems. This technical note presents a new sequence of novel summation inequalities by introducing some free matrices, which includes the newly-developed Wirtinger-based and free-matrix-based summation inequalities as special cases. Moreover, the idea can be easily extended to the multiple-summation-inequality case. Based on the proposed inequalities, relaxed stability conditions are obtained for systems with time-varying delay. Numerical examples are given to demonstrate the effectiveness of the proposed approach.
Stability criterion for delayed neural networks via Wirtinger-based multiple integral inequality. This brief provides an alternative way to reduce the conservativeness of the stability criterion for neural networks (NNs) with time-varying delays. The core is that a series of multiple integral terms are considered as a part of the Lyapunov-Krasovskii functional (LKF). In order to estimate the multiple integral terms in the derivative of the LKF, a multiple integral inequality, named Wirtinger-based multiple integral inequality (WMII), is proposed. This inequality includes some recent related results as its special cases. Based on the multiple integral forms of LKF and the WMII, a novel delay dependent stability criterion for NNs with time-varying delays is derived. The effectiveness of the established stability criterion is verified by an open example.
New approach to stability criteria for generalized neural networks with interval time-varying delays. This paper is concerned with the problem of delay-dependent stability of delayed generalized continuous neural networks, which include two classes of fundamental neural networks, i.e., static neural networks and local field neural networks, as their special cases. It is assumed that the state delay belongs to a given interval, which means that the lower bound of delay is not restricted to be zero. An improved integral inequality lemma is proposed to handle the cross-product terms occurred in derivative of constructed Lyapunov–Krasovskii functional. By using the new lemma and delay partitioning method, some less conservative stability criteria are obtained in terms of LMIs. Numerical examples are finally given to illustrate the effectiveness of the proposed method over the existing ones.
Formal Derivation of Strongly Correct Concurrent Programs. A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
Hypertext: An Introduction and Survey
A field study of the software design process for large systems The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.
Four dark corners of requirements engineering Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the "four dark corners," exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed.
Algorithms for drawing graphs: an annotated bibliography Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and software engineering diagrams. In this paper we present a bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread over the broad spectrum of Computer Science. This bibliography constitutes an attempt to encompass both theoretical and application oriented papers from disparate areas.
Generating, integrating, and activating thesauri for concept-based document retrieval A blackboard-based document management system that uses a neural network spreading-activation algorithm which lets users traverse multiple thesauri is discussed. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts. The system provides two control modes: a browsing module and an activation module that determine the sequence of operations. With the browsing module, users have full control over which knowledge sources to browse and what terms to select. The system's query formation; the retrieving, ranking and selection of documents; and thesaurus activation are described.
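Below is a hedged sketch of the spreading-activation idea described above, in Python. The toy thesaurus graph, decay factor and pruning threshold are invented for illustration and do not reproduce the system's heuristics.

```python
# Generic spreading activation over a weighted term graph: activation flows from
# seed terms to neighbours, attenuated by edge weight and a decay factor, and
# weak contributions are pruned.
def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_iters=10):
    """graph: {term: [(neighbour, weight), ...]}, seeds: {term: initial activation}."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_iters):
        new_frontier = {}
        for term, act in frontier.items():
            for neighbour, weight in graph.get(term, []):
                delta = act * weight * decay
                if delta < threshold:
                    continue            # prune weak activations
                activation[neighbour] = activation.get(neighbour, 0.0) + delta
                new_frontier[neighbour] = new_frontier.get(neighbour, 0.0) + delta
        if not new_frontier:
            break
        frontier = new_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

# illustrative thesaurus fragment
thesaurus = {
    "neural network": [("machine learning", 0.9), ("spreading activation", 0.6)],
    "machine learning": [("classification", 0.8)],
    "spreading activation": [("semantic network", 0.7)],
}
print(spread_activation(thesaurus, {"neural network": 1.0}))
```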
Unifying correctness statements Partial, total and general correctness and further models of sequential computations differ in their treatment of finite, infinite and aborting executions. Algebras structure this diversity of models to avoid the repeated development of similar theories and to clarify their range of application. We introduce algebras that uniformly describe correctness statements, correctness calculi, pre-post specifications and loop refinement rules in five kinds of computation models. This extends previous work that unifies iteration, recursion and program transformations for some of these models. Our new description includes a relativised domain operation, which ignores parts of a computation, and represents bound functions for claims of termination by sequences of tests. We verify all results in Isabelle heavily using its automated theorem provers.
Notes on Nonrepetitive Graph Colouring. A vertex colouring of a graph is nonrepetitive on paths if there is no path $v_1, v_2, \ldots, v_{2t}$ such that $v_i$ and $v_{t+i}$ receive the same colour for all $i = 1, 2, \ldots, t$. We determine the maximum density of a graph that admits a $k$-colouring that is nonrepetitive on paths. We prove that every graph has a subdivision that admits a 4-colouring that is nonrepetitive on paths. The best previous bound was 5. We also study colourings that are nonrepetitive on walks, and provide a conjecture that would imply that every graph with maximum degree $\Delta$ has an $f(\Delta)$-colouring that is nonrepetitive on walks. We prove that every graph with treewidth $k$ and maximum degree $\Delta$ has a $O(k\Delta)$-colouring that is nonrepetitive on paths, and a $O(k\Delta^3)$-colouring that is nonrepetitive on walks.
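The definition above is easy to operationalise for a single path: a colour sequence is nonrepetitive exactly when no block is immediately followed by an identical block. A brute-force Python check, for illustration only:

```python
# Check whether a colour sequence read along a path is nonrepetitive,
# i.e. contains no block immediately followed by an identical block.
def is_nonrepetitive(colours):
    n = len(colours)
    for start in range(n):
        for t in range(1, (n - start) // 2 + 1):
            if colours[start:start + t] == colours[start + t:start + 2 * t]:
                return False            # found v_1..v_t = v_{t+1}..v_{2t}
    return True

assert is_nonrepetitive([1, 2, 3, 1, 3, 2])       # a square-free word
assert not is_nonrepetitive([1, 2, 1, 2, 3])      # the block "1 2" repeats immediately
```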
Generalised rely-guarantee concurrency: An algebraic foundation. The rely-guarantee technique allows one to reason compositionally about concurrent programs. To handle interference the technique makes use of rely and guarantee conditions, both of which are binary relations on states. A rely condition is an assumption that the environment performs only atomic steps satisfying the rely relation and a guarantee is a commitment that every atomic step the program makes satisfies the guarantee relation. In order to investigate rely-guarantee reasoning more generally, in this paper we allow interference to be represented by a process rather than a relation and hence derive more general rely-guarantee laws. The paper makes use of a weak conjunction operator between processes, which generalises a guarantee relation to a guarantee process, and introduces a rely quotient operator, which generalises a rely relation to a process. The paper focuses on the algebraic properties of the general rely-guarantee theory. The Jones-style rely-guarantee theory can be interpreted as a model of the general algebraic theory and hence the general laws presented here hold for that theory.
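For readers who want the relational starting point, one common simplified statement of the Jones-style parallel composition rule that this algebra generalises is the following (side conditions such as stability of pre- and post-conditions under the rely are omitted here):

$$\frac{\{p,\; r \vee g_2\}\; c_1\; \{g_1,\; q_1\} \qquad \{p,\; r \vee g_1\}\; c_2\; \{g_2,\; q_2\}}{\{p,\; r\}\; c_1 \parallel c_2\; \{g_1 \vee g_2,\; q_1 \wedge q_2\}}$$

Replacing the rely relation $r$ by a process (via the rely quotient) and the guarantee relation by a weak conjunction with a guarantee process is the generalisation the paper develops.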
1.2
0.2
0.066667
0.022222
0.008333
0
0
0
0
0
0
0
0
0
Behavioural Constraints Using Events
The Object Flow Model: A Formal Framework for Describing the Dynamic Construction, Destruction and Interaction of Complex Objects This research complements active object-oriented database management systems by providing a formal, yet conceptually-natural model for complex object construction and destruction. The Object Flow Model (OFM), introduced in this paper, assumes an object-oriented database for the rich structural description of objects and for the specification of methods to manipulate objects. The OFM contributes a third component, the Object Flow Diagram (OFD), which provides a visual formalism to describe how multiple objects and events can actively invoke processing steps, how objects can become part of progressively more complex objects, and how complex objects can be picked apart. The OFD thus provides an invocation mechanism that is more general than a single message and a processing mechanism that may invoke multiple methods (so long as they apply to either the input or output objects). The development of the OFD was influenced by conceptual modeling languages and discrete event simulation languages and the formal semantics of the OFD is based on work in deductive databases.
Integrity Checking in a Logic-Oriented ER Model
The Notion of ``Classes of a Path'' in ER Schemas In Entity-Relationship (ER) modeling, connection traps are a known problem, but the literature does not seem to have provided an adequate treatment of them. Moreover, they seem to be only a special case of a more fundamental problem: whether a piece of information can be represented by a database that is specified by an ER schema. To develop a systematic treatment of this problem, in this paper we suggest adopting a semiotic approach, which enables the separation of topological connections at the syntactic level from semantic connections, and an examination of the inter-relationships between them. Based on this, we propose and describe the notion of 'classes of a path' in an ER schema, and then indicate its implications for ER modeling.
Connections in acyclic hypergraphs We demonstrate a sense in which the equivalence between blocks (subgraphs without articulation points) and biconnected components (subgraphs in which there are two edge-disjoint paths between any pair of nodes) that holds in ordinary graph theory can be generalized to hypergraphs. The result has an interpretation for relational databases that the universal relations described by acyclic join dependencies are exactly those for which the connections among attributes are defined uniquely. We also exhibit a relationship between the process of Graham reduction (Graham, 1979) of hypergraphs and the process of tableau reduction (Aho, Sagiv and Ullman, 1979) that holds only for acyclic hypergraphs.
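The Graham reduction mentioned above can be sketched in a few lines of Python (a generic GYO-style acyclicity test, not tied to the paper's notation): repeatedly delete vertices that occur in only one hyperedge and hyperedges covered by another, and declare the hypergraph acyclic exactly when everything can be deleted.

```python
# Graham (GYO) reduction as an acyclicity test for hypergraphs.
def is_acyclic(hyperedges):
    edges = [set(e) for e in hyperedges if e]
    changed = True
    while changed and edges:
        changed = False
        # ear removal: delete vertices that occur in exactly one hyperedge
        for v in {v for e in edges for v in e}:
            if sum(1 for e in edges if v in e) == 1:
                for e in edges:
                    e.discard(v)
                changed = True
        # delete hyperedges that are empty or covered by another hyperedge
        kept = []
        for i, e in enumerate(edges):
            covered = any(j != i and (e < f or (e == f and j < i))
                          for j, f in enumerate(edges))
            if not e or covered:
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

print(is_acyclic([{"A", "B"}, {"B", "C"}, {"C", "D"}]))   # True: a simple path
print(is_acyclic([{"A", "B"}, {"B", "C"}, {"A", "C"}]))   # False: a cycle
```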
A simplified universal relation assumption and its properties One problem concerning the universal relation assumption is the inability of known methods to obtain a database scheme design in the general case, where the real-world constraints are given by a set of dependencies that includes embedded multivalued dependencies. We propose a simpler method of describing the real world, where constraints are given by functional dependencies and a single join dependency. The relationship between this method of defining the real world and the classical methods is exposed. We characterize in terms of hypergraphs those multivalued dependencies that are the consequence of a given join dependency. Also characterized in terms of hypergraphs are those join dependencies that are equivalent to a set of multivalued dependencies.
A distributed alternative to finite-state-machine specifications A specification technique, formally equivalent to finite-state machines, is offered as an alternative because it is inherently distributed and more comprehensible. When applied to modules whose complexity is dominated by control, the technique guides the analyst to an effective decomposition of complexity, encourages well-structured error handling, and offers an opportunity for parallel computation. When applied to distributed protocols, the technique provides a unique perspective and facilitates automatic detection of some classes of error. These applications are illustrated by a controller for a distributed telephone system and the full-duplex alternating-bit protocol for data communication. Several schemes are presented for executing the resulting specifications.
Software process modeling: principles of entity process models
A Conceptual Framework for Requirements Engineering. A framework for assessing research and practice in requirements engineering is proposed. The framework is used to survey state of the art research contributions and practice. The framework considers a task activity view of requirements, and elaborates different views of requirements engineering (RE) depending on the starting point of a system development. Another perspective is to analyse RE from different conceptions of products and their properties. RE research is examined within this framework and then placed in the context of how it extends current system development methods and systems analysis techniques.
A Methodology for Developing Distributed Programs A methodology, different from the existing ones, for constructing distributed programs is presented. It is based on the well-known idea of developing distributed programs via synchronous and centralized programs. The distinguishing features of the methodology are: 1) specifications include process structure information, and distributed programs are developed taking this information into account, 2) a new class of programs, called PPSAs, is used in the development process, and 3) a transformational approach is suggested to solve the problems inherent in the method of developing distributed programs through synchronous and centralized programs. The methodology is illustrated with an example.
Information system design methodology.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions, allowing for parallel edge detection processing. The implementation is very simple and computationally efficient.
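As a point of comparison for the filters discussed above, here is a hedged Python/NumPy sketch of the separable derivative-of-Gaussian edge detector. It is a plain FIR approximation, not the recursive IIR filter derived in the paper; the sigma value and threshold are arbitrary.

```python
# Separable derivative-of-Gaussian gradient estimation on a 2-D image:
# differentiate along one axis, smooth along the other, then take the magnitude.
import numpy as np

def gaussian_kernels(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                      # normalised smoothing kernel
    dg = -x / sigma**2 * g            # first derivative of the Gaussian
    return g, dg

def conv1d(image, kernel, axis):
    # convolve every 1-D slice of the image along the given axis
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), axis, image)

def gradient_magnitude(image, sigma=1.5):
    g, dg = gaussian_kernels(sigma)
    gx = conv1d(conv1d(image, dg, axis=1), g, axis=0)   # derivative in x, smooth in y
    gy = conv1d(conv1d(image, dg, axis=0), g, axis=1)   # derivative in y, smooth in x
    return np.hypot(gx, gy)

# toy usage: a vertical step edge in a synthetic image
img = np.zeros((64, 64))
img[:, 32:] = 1.0
edges = gradient_magnitude(img) > 0.2                   # arbitrary threshold
print(int(edges.sum()), "edge pixels found")
```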
Multitemporal Hyperspectral Image Compression. The compression of multitemporal hyperspectral imagery is considered, wherein the encoder uses a reference image to effectuate temporal decorrelation for the coding of the current image. Both linear prediction and a spectral concatenation of images are explored to this end. Experimental results demonstrate that, when there are few changes between two images, the gain in rate-distortion performance...
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.051647
0.050118
0.050055
0.050055
0.025661
0.017107
0.003359
0.000067
0.000027
0.000013
0.000005
0
0
0
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Multi-paradigm query interface to an object-oriented database The object-oriented paradigm has a number of widely recognised strengths when applied to data management, but the increased complexity of actual systems compared with their relational predecessors often means that such databases are less readily accessible to nonprogrammers than relational systems. A number of proposals have been made for textual, form-based and graph-based query interfaces to object-oriented databases, but it is clear that a single approach cannot be considered to be the best, given the wide range of potential user groups, application domains and tasks. The paper presents a query interface to an object-oriented database which supports alternative user-level query paradigms in a fully integrated environment, thereby enabling different categories of user to select a preferred interface paradigm from a list of options. Furthermore, the interface enables users to examine queries written in one query interface using any of the other interface paradigms, which is useful for sharing queries in the multi-paradigm context, and for helping users familiar with one approach to learn another. The system has been prototyped using the ADAM object-oriented database system, and an experimental comparison of different interaction modes has been conducted.
Experimental investigation of the utility of data structure and E-R diagrams in database query
A Hypergraph-based Framework for Visual Interaction with Databases The advent of graphical workstations has led to a new generation of interaction tools in database systems, where the use of graphics greatly enhances the quality of the interaction. Yet, Visual Query Languages present some limitations, deriving partly from their own paradigm and partly from the available technology. One of the basic drawbacks is the lack of formalization, in contrast to the well-established traditional languages. In this paper we propose a theoretical framework for visual interaction with databases, having a particular kind of hypergraph, the Structure Modeling Hypergraph (SMH), as a representation tool, able to capture the features of existing data models. SMHs profit from the basic property of diagrams while overcoming their limitations. Notable characteristics of SMHs are: uniform and unified representation of intensional and extensional aspects of databases, direct representation of containment relationships, and immediate applicability of direct manipulation primitives. SMHs are not a new data model but a new representation language that provides the syntactic rules for describing the structuring mechanisms of data models. SMHs can be queried by formal systems closed under queries.
Visualizing queries and querying visualizations In this paper, we describe the approach to visual display and manipulation of databases that we have been investigating at the University of Toronto for the past few years. We present an overview and retrospective of the G
A simplified universal relation assumption and its properties One problem concerning the universal relation assumption is the inability of known methods to obtain a database scheme design in the general case, where the real-world constraints are given by a set of dependencies that includes embedded multivalued dependencies. We propose a simpler method of describing the real world, where constraints are given by functional dependencies and a single join dependency. The relationship between this method of defining the real world and the classical methods is exposed. We characterize in terms of hypergraphs those multivalued dependencies that are the consequence of a given join dependency. Also characterized in terms of hypergraphs are those join dependencies that are equivalent to a set of multivalued dependencies.
ENIAM: a more complete conceptual schema language
Synthesizing object life cycles from business process models. Unified modeling language (UML) activity diagrams can model the flow of stateful business objects among activities, implicitly specifying the life cycles of those objects. The actual object life cycles are typically expressed in UML state machines. The implicit life cycles in UML activity diagrams need to be discovered in order to derive the actual object life cycles or to check the consistency with an existing life cycle. This paper presents an automated approach for synthesizing a UML state machine modeling the life cycle of an object that occurs in different states in a UML activity diagram. The generated state machines can contain parallelism, loops, and cross-synchronization. The approach makes life cycles that have been modeled implicitly in activity diagrams explicit. The synthesis approach has been implemented using a graph transformation tool and has been applied in several case studies.
Visualization of structural information: automatic drawing of compound digraphs An automatic method for drawing compound digraphs that contain both inclusion edges and adjacency edges are presented. In the method vertices are drawn as rectangles (areas for texts, images, etc.), inclusion edges by the geometric inclusion among the rectangles, and adjacency edges by arrows connecting them. Readability elements such as drawing conventions and rules are identified, and a heuristic algorithm to generate readable diagrams is developed. Several applications are shown to demonstrate the effectiveness of the algorithm. The utilization of curves to improve the quality of diagrams is investigated. A possible set of command primitives for progressively organizing structures within this graph formalism is discussed. The computational time for the applications shows that the algorithm achieves satisfactory performance
Supporting systems development by capturing deliberations during requirements engineering Support for various stakeholders involved in software projects (designers, maintenance personnel, project managers and executives, end users) can be provided by capturing the history about design decisions in the early stages of the system's development life cycle in a structured manner. Much of this knowledge, which is called the process knowledge, involving the deliberation on alternative requirements and design decisions, is lost in the course of designing and changing such systems. Using an empirical study of problem-solving behavior of individual and groups of information systems professionals, a conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects that are created during the requirements engineering process has been developed. A prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.
Supporting Multi-Perspective Requirements Engineering Supporting collaborating requirements engineers as they independently construct a specification is highly desirable. Here, we show how collaborative requirements engineering can be supported using a planner, domain abstractions, and automated decision science techniques. In particular, we show how requirements conflict resolution can be assisted through a combination of multi-agent multicriteria optimization and heuristic resolution generation. We then summarize the use of our tool to...
Building testable software This paper examines a connection between well known specification, design, implementation methodologies and test-design which appears not to have been previously well-formulated. We refer to the fact that the use of finite state machines (FSMs) in each development phase (specification, design, implementation and testing) is well known and documented. However, despite the fact that much of this work is more than twenty years old, there appears to be no detailed proposal for a consistent FSM-based approach be used across all development phases for other than very specific application types. We suggest that the adoption of a systematic FSM-based approach across all phases, including implementation, may allow a number of major problems in software development to be either eliminated or simplified. In this way, testable, highly dependable systems can be produced. In such systems, behaviour is explicitly defined, built, and tested using both functional and structural methods. Undesired behaviours can be found and eliminated, and abnormal or unexpected input explicitly handled. We discuss the issues we consider to be involved, and the benefits which we expect may be gained. We also identify those areas where further work appears to be required.
Verification of Reactive Systems Using DisCo and PVS
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.130778
0.16
0.16
0.106667
0.001334
0.000581
0.000075
0.000002
0
0
0
0
0
0
The Impact of Supporting Organizational Knowledge Management through a Corporate Portal on Employees and Business Processes Because corporate portals play a major role in organizational knowledge management (KM), this study was conducted to assess the impact of supporting KM processes through a corporate portal on business processes and employees at an academic institution. This paper specifically assesses the impact of knowledge acquisition, knowledge conversion, knowledge application and knowledge protection on business processes' effectiveness, efficiency and innovation, and employees' learning, adaptability, and job satisfaction. Findings suggest that the ending KM process, knowledge application, produces the highest impact on business processes and employees. First, supporting knowledge application through a corporate portal was positively associated with business processes' effectiveness and innovation and employees' learning, adaptability, and job satisfaction. Second, supporting knowledge conversion was positively associated with business processes' effectiveness and employees' learning, whereas supporting knowledge protection was positively associated with business processes' effectiveness and efficiency but negatively associated with employees' learning. Finally, supporting knowledge acquisition was positively associated with only business processes' innovation.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Augmented Lyapunov-Krasovskii Functional Approach to Stability of Discrete Systems With Time-Varying Delays. This paper investigates the problem of delay-dependent stability for discrete-time systems with time-varying delays. A novel augmented Lyapunov-Krasovskii functional is proposed in deriving stability criteria in which the feasible region is enhanced. Also, an improved summation inequality is developed and applied to find the lower bound of summation inequalities. Via two numerical examples, improved results will be shown by comparing with maximum delay bounds.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Viewpoint consistency in ODP Open Distributed Processing (ODP) is a joint ITU/ISO standardisation framework for constructing distributed systems in a multi-vendor environment. Central to the ODP approach is the use of viewpoints for specification and design. Inherent in any viewpoint approach is the need to check and manage the consistency of viewpoints. In previous work we have described techniques for consistency checking, refinement, and translation between viewpoint specifications, in particular for LOTOS and Z/Object-Z. Here we present an overview of our work, motivated by a case study combining these techniques in order to show consistency between viewpoints specified in LOTOS and Object-Z.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096-processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Application of Structural Modeling and Automated Reasoning to Real-Time Systems Design This paper presents an application of structural modeling and automated reasoning as a software development environment for real-time systems. This application satisfies two major requirements for such an environment: (1) to synthesize an absolutely correct program and (2) to increase software productivity. The real-time systems, which consist of concurrent programs, are described by a Prolog-based concurrent object-oriented language, called MENDEL/87. A typical concurrent program consists of two parts: a functional part and a synchronization part. The functional part in the reusable component to be registered in a library will be generated by structural modeling through the use of structuring functions with respect to data flows. The synchronization part will be synthesized from temporal logic specifications by the use of an automated reasoning mechanism. This paper also describes the MENDELS ZONE implemented on a Prolog machine, which is the working base for the presented application method.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Refinement of Actions in Circus This paper presents refinement laws to support the development of actions in Circus, a combination of Z and CSP adequate to specify the data structures and behavioural aspects of concurrent systems. In this language, systems are characterised as a set of processes; each process is a unit that encapsulates state and reactive behaviour defined by actions. Previously, we have addressed the issue of refining processes. Here, we are concerned with the actions that compose the behaviour of such processes, and that may involve both Z and CSP constructs. We present a number of useful laws, and a case study that illustrates their application.
A Circus Semantics for Ravenscar Protected Objects The Ravenscar profile is a subset of the Ada 95 tasking model: it is certifiable, deterministic, supports schedulability analysis, and meets tight memory constraints and performance requirements. A central feature of Ravenscar is the use of protected objects to ensure mutually exclusive access to shared data. We give a semantics to protected objects using Circus, a combination of Z and CSP, and prove several important properties; this is the first time that these properties have been verified. Interestingly, all the proofs are conducted in Z, even the ones concerning reactive behaviour.
Refinement in Circus We describe refinement in Circus, a concurrent specification language that integrates imperative CSP, Z, and the refinement calculus. Each Circus process has a state and accompanying actions that define both the internal state transitions and the changes in control flow that occur during execution. We define the meaning of refinement of processes and their actions, and propose a sound data refinement technique for process refinement. Refinement laws for CSP and Z are directly relevant and applicable to Circus, but our focus here is on new laws for processes that integrate state and control. We give some new results about the distribution of data refinement through the combinators of CSP. We illustrate our ideas with the development of a distributed system of cooperating processes from a centralised specification.
ZRC --- A Refinement Calculus for Z The fact that Z is a specification language only, with no associated program development method, is a widely recognised problem. As an answer to that, we present ZRC, a refinement calculus based on Morgan's work that incorporates the Z notation and follows its style and conventions. This work builds upon existing refinement techniques for Z, but distinguishes itself mainly in that ZRC is completely formalised. In this paper, we explain how programs can be derived from Z specifications using ZRC. We present ZRC-L, the language of our calculus, and its conversion laws, which are concerned with the transformation of Z schemas into programs of this language. Moreover, we present the weakest precondition semantics of ZRC-L, which is the basis for the derivation of the laws of ZRC. More than a refinement calculus, ZRC is a theory of refinement for Z.
Stepwise refinement of parallel algorithms The refinement calculus and the action system formalism are combined to provide a uniform method for constructing parallel and distributed algorithms by stepwise refinement. It is shown that the sequential refinement calculus can be used as such for most of the derivation steps. Parallelism is introduced during the derivation by refinement of atomicity. The approach is applied to the derivation of a parallel version of the Gaussian elimination method for solving simultaneous linear equation systems.
B#: toward a synthesis between Z and B In this paper, I present some ideas and principles underlying the realization of a new project called B#. This project follows the main ideas and principles already at work in B, but it also follows a number of older concepts developed in Z. In B#, the intent is to have a formal system to be used to model complex systems in general, not only software systems.
Types and invariants in the refinement calculus A rigorous treatment of types as sets is given for the refinement calculus, a method of imperative program development. It is simple, supports existing practice, casts new light on type-checking, and suggests generalisations that might be of practical benefit.
Terms with unbounded demonic and angelic nondeterminacy We show how to introduce demonic and angelic nondeterminacy into the term language of each type in a typical programming or specification language. For each type we introduce (binary infix) operators ⊓ and ⊔ on terms of the type, corresponding to demonic and angelic nondeterminacy, respectively. We generalise these operators to accommodate unbounded nondeterminacy. We axiomatise the operators and derive their important properties. We show that a suitable model for nondeterminacy is the free completely distributive complete lattice over a poset, and we use this to show that our axiomatisation is sound. In the process, we exhibit a strong relationship between nondeterminacy and free lattices that has not hitherto been evident.
Appraising Fairness in Languages for Distributed Programming The relations among various languages and models for distributed computation and various possible definitions of fairness are considered. Natural semantic criteria are presented which an acceptable notion of fairness should satisfy. These are then used to demonstrate differences among the basic models, the added power of the fairness notion, and the sensitivity of the fairness notion to irrelevant semantic interleavings of independent operations. These results are used to show that from the considerable variety of commonly used possibilities, only strong process fairness is appropriate for CSP if these criteria are adopted. We also show that under these criteria, none of the commonly used notions of fairness are fully acceptable for a model with an n-way synchronization mechanism. The notion of fairness most often mentioned for Ada is shown to be fully acceptable. For a model with nonblocking send operations, some variants of common fairness definitions are appraised, and two are shown to satisfy the suggested criteria.
An image multiresolution representation for lossless and lossy compression We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods.
Two-dimensional PCA: a new approach to appearance-based face representation and recognition. In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.
Qualitative Action Systems An extension to action systems is presented facilitating the modeling of continuous behavior in the discrete domain. The original action system formalism has been developed by Back et al. in order to describe parallel and distributed computations of discrete systems, i.e. systems with discrete state space and discrete control. In order to cope with hybrid systems, i.e. systems with continuous evolution and discrete control, two extensions have been proposed: hybrid action systems and continuous action systems. Both use differential equations (relations) to describe continuous evolution. Our version of action systems takes an alternative approach by adding a level of abstraction: continuous behavior is modeled by Qualitative Differential Equations that are the preferred choice when it comes to specifying abstract and possibly non-deterministic requirements of continuous behavior. Because their solutions are transition systems, all evolutions in our qualitative action systems are discrete. Based on hybrid action systems, we develop a new theory of qualitative action systems and discuss how we have applied such models in the context of automated test-case generation for hybrid systems.
Vivid: A framework for heterogeneous problem solving We introduce Vivid, a domain-independent framework for mechanized heterogeneous reasoning that combines diagrammatic and symbolic representation and inference. The framework is presented in the form of a family of denotational proof languages (DPLs). We present novel formal structures, called named system states, that are specifically designed for modeling potentially underdetermined diagrams. These structures allow us to deal with incomplete information, a pervasive feature of heterogeneous problem solving. We introduce a notion of attribute interpretations that enables us to interpret first-order relational signatures into named system states, and develop a formal semantic framework based on 3-valued logic. We extend the assumption-base semantics of DPLs to accommodate diagrammatic reasoning by introducing general inference mechanisms for the valid extraction of information from diagrams, and for the incorporation of sentential information into diagrams. A rigorous big-step operational semantics is given, on the basis of which we prove that the framework is sound. We present examples of particular instances of Vivid in order to solve a series of problems, and discuss related work.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.053227
0.05
0.030871
0.011026
0.000849
0.000299
0.000128
0.000036
0.000007
0
0
0
0
0
A theory for execution-time derivation in real-time programs We provide an abstract command language for real-time programs and outline how a partial correctness semantics can be used to compute execution times. The notions of a timed command, refinement of a timed command, the command traversal condition, and the worst-case and best-case execution time of a command are formally introduced and investigated with the help of an underlying weakest liberal precondition semantics. The central result is a theory for the computation of worst-case and best-case execution times from the underlying semantics based on supremum and infimum calculations. The framework is applied to the analysis of a message transmitter program and its implementation.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
SICLIC: A Simple Inter-Color Lossless Image Coder Many applications require high quality color images. In order to alleviate storage space and transmission time, while preserving high quality, these images are losslessly compressed. Most of the image compression algorithms treat the color image, usually in RGB format, as a set of independent gray scale images. SICLIC is a novel inter-color coding algorithm based on a LOCO-like algorithm. It combines the simplicity of Golomb-Rice coding with the potential of context models, in both intra-color and inter-color encoding. It also supports intra-color and inter-color alphabet extension, in order to reduce the redundancy of code. SICLIC attains compression ratios superior to those obtained with most of the state-of-the-art compression algorithms and achieves compression ratios very close to those of Inter-Band CALIC, with much lower complexity. With arithmetic coding, SICLIC attains better compression than Inter-Band CALIC.
Fast Constant Division Routines When there is no division circuit available, the arithmetical function of division is normally performed by a library subroutine. The library subroutine normally allows both the divisor and the dividend to be variables, and requires the execution of hundreds of assembly instructions. This correspondence provides a fast algorithm for performing the integer division of a variable by a predetermined divisor. Based upon this algorithm, an efficient division routine has been constructed for each odd divisor up to 55. These routines may be implemented in assembly languages, in microcodes, and in special-purpose circuits.
Context-based predictive lossless coding for hyperspectral images A cluster-based lossless compression algorithm for hyperspectral images is presented. Clustering is carried out on the original data according to the vectors' spectra, and it is used to set up multiple contexts for predictive lossless coding. Low-order prediction is performed using adaptive Linear Least Squares (LLS) estimation which exploits the additional information provided by clustering. Prediction errors are then entropy-coded using an adaptive arithmetic coder also driven by data clusters. The proposed scheme is used to losslessly code a set of AVIRIS hyperspectral images. Comparisons with the JPEG-LS, JPEG-2000 and the clustered DPCM coding algorithms are given.
Relations between entropy and error probability The relation between the entropy of a discrete random variable and the minimum attainable probability of error made in guessing its value is examined. While Fano's inequality provides a tight lower bound on the error probability in terms of the entropy, the present authors derive a converse result: a tight upper bound on the minimal error probability in terms of the entropy. Both bounds are sharp, and can draw a relation, as well, between the error probability for the maximum a posteriori (MAP) rule, and the conditional entropy (equivocation), which is a useful uncertainty measure in several applications. Combining this relation and the classical channel coding theorem, the authors present a channel coding theorem for the equivocation which, unlike the channel coding theorem for error probability, is meaningful at all rates. This theorem is proved directly for DMCs, and from this proof it is further concluded that for R ≥ C the equivocation achieves its minimal value of R − C at the rate of n^(-1/2), where n is the block length.
Generalized Kraft inequality and arithmetic coding Algorithms for encoding and decoding finite strings over a finite alphabet are described. The coding operations are arithmetic, involving rational numbers l_i as parameters such that ∑_i 2^(−l_i) ≤ 2^(−ε). This coding technique requires no blocking, and the per-symbol length of the encoded string approaches the associated entropy within ε. The coding speed is comparable to that of conventional coding methods.
Coding of sources with two-sided geometric distributions and unknown parameters Lossless compression is studied for a countably infinite alphabet source with an unknown, off-centered, two-sided geometric (TSG) distribution, which is a commonly used statistical model for image prediction residuals. We demonstrate that arithmetic coding based on a simple strategy of model adaptation, essentially attains the theoretical lower bound to the universal coding redundancy associated with this model. We then focus on more practical codes for the TSG model, that operate on a symbol-by-symbol basis, and study the problem of adaptively selecting a code from a given discrete family. By taking advantage of the structure of the optimum Huffman tree for a known TSG distribution, which enables simple calculation of the codeword of every given source symbol, an efficient adaptive strategy is derived
Run-length encodings (Corresp.)
Efficient Run-Length Encoding of Binary Sources with Unknown Statistics We present a new binary entropy coder of the Golomb family, with an adaptation strategy that is nearly optimum in a maximum-likelihood sense. This new encoder can be implemented efficiently in practice, since it uses only integer arithmetic and no divisions. That way, the proposed encoder has a complexity nearly identical to that of popular adaptive Rice coders. However, whereas Golomb-Rice coders have an excess rate with respect to the source entropy of up to 4.2% for binary sources with unknown statistics, the proposed encoder has an excess rate of less than 2%.
Three dimensional discrete wavelet transform with reduced number of lifting steps This report reduces the total number of lifting steps in a three-dimensional (3D) double lifting discrete wavelet transform (DWT), which has been widely applied for analyzing volumetric medical images. The lifting steps are necessary components in a DWT. Since calculation in a lifting step must wait for the result of the former step, cascading many lifting steps brings about an increase in delay from input to output. We decrease the total number of lifting steps by introducing 3D memory accessing for the implementation of a low delay 3D DWT. We also maintain compatibility with the conventional 5/3 DWT defined by the JPEG 2000 international standard for utilization of its software and hardware resources. Finally, the total number of lifting steps and rounding operations were reduced to 67 % and 33 %, respectively. It was observed that the total amount of errors due to rounding operations in the lifting steps was also reduced.
Lossless Compression of Hyperspectral Imagery via Clustered Differential Pulse Code Modulation with Removal of Local Spectral Outliers A high-order clustered differential pulse code modulation method with removal of local spectral outliers (C-DPCM-RLSO) is proposed for the lossless compression of hyperspectral images. By adaptively removing the local spectral outliers, the C-DPCM-RLSO method improves the prediction accuracy of the high-order regression predictor and reduces the residuals between the predicted and the original images. Experiments on a set of NASA Airborne Visible Infrared Imaging Spectrometer (AVIRIS) test images show that the C-DPCM-RLSO method has a comparable average compression gain but a much reduced execution time as compared with the previous lossless methods.
Specifications are (preferably) executable The validation of software specifications with respect to explicit and implicit user requirements is extremely difficult. To ease the validation task and to give users immediate feedback of the behavior of the future software it was suggested to make specifications executable. However, Hayes and Jones (Hayes, Jones 89) argue that executable specifications should be avoided because executability can restrict the expressiveness of specification languages, and can adversely affect implementations. In this paper I will argue for executable specifications by showing that non-executable formal specifications can be made executable on almost the same level of abstraction and without essentially changing their structure. No new algorithms have to be introduced to get executability. In many cases the combination of property-orientation and search results in specifications based on the generate-and-test approach. Furthermore, I will demonstrate that declarative specification languages allow to combine high expressiveness and executability.
The weakest precondition calculus: Recursion and duality An extension of Dijkstra's guarded command language is studied, including unbounded demonic choice and a backtrack operator. We consider three orderings on this language: a refinement ordering defined by Back, a new deadlock ordering, and an approximation ordering of Nelson. The deadlock ordering is in between the two other orderings. All operators are monotonic in Nelson's ordering, but backtracking is not monotonic in Back's ordering and sequential composition is not monotonic for the deadlock ordering. At first sight recursion can only be added using Nelson's ordering. We show that, under certain circumstances, least fixed points for non-monotonic functions can be obtained by iteration from the least element. This permits the addition of recursion even using Back's ordering or the deadlock ordering in a fully compositional way. In order to give a semantic characterization of the three orderings that relates initial states to possible outcomes of the computation, the relation between predicate transformers and discrete power domains is studied. We consider (two versions of) the Smyth power domain and the Egli-Milner power domain.
Matrices or node-link diagrams: which visual representation is better for visualising connectivity models? Adjacency matrices or DSMs (design structure matrices) and node-link diagrams are both visual representations of graphs, which are a common form of data in many disciplines. DSMs are used throughout the engineering community for various applications, such as process modelling or change prediction. However, outside this community, DSMs (and other matrix-based representations of graphs) are rarely applied and node-link diagrams are very popular. This paper will examine, which representation is more suitable for visualising graphs. For this purpose, several user experiments were conducted that aimed to answer this research question in the context of product models used, for example in engineering, but the results can be generalised to other applications. These experiments identify key factors on the readability of graph visualisations and confirm work on comparisons of different representations. This study widens the scope of readability comparisons between node-link and matrix-based representations by introducing new user tasks and replacing simulated, undirected graphs with directed ones employing real-world semantics.
The Use of Machine Learning Algorithms in Recommender Systems: A Systematic Review. •A survey of machine learning (ML) algorithms in recommender systems (RSs) is provided.•The surveyed studies are classified in different RS categories.•The studies are classified based on the types of ML algorithms and application domains.•The studies are also analyzed according to main and alternative performance metrics.•LNCS and EWSA are the main sources of studies in this research field.
1.067969
0.066723
0.066667
0.026703
0.016693
0.011129
0.002073
0.000027
0.000015
0.000004
0
0
0
0
Dissipativity analysis of neural networks with time-varying delays This paper focuses on the problem of delay-dependent dissipativity analysis for a class of neural networks with time-varying delays. A free-matrix-based inequality method is developed by introducing a set of slack variables, which can be optimized via existing convex optimization algorithms. Then, by employing Lyapunov functional approach, sufficient conditions are derived to guarantee that the considered neural networks are strictly (Q,S,R)-γ-dissipative. The conditions are presented in terms of linear matrix inequalities and can be readily checked and solved. Numerical examples are finally provided to demonstrate the effectiveness and advantages of the proposed new design techniques.
Further results on passivity analysis for uncertain neural networks with discrete and distributed delays. The problem of passivity analysis of uncertain neural networks (UNNs) with discrete and distributed delay is considered. By constructing a suitable augmented Lyapunov-Krasovskii functional (LKF) and combining a novel integral inequality with a convex approach to estimate the derivative of the proposed LKF, improved sufficient conditions to guarantee passivity of the concerned neural networks are established within the framework of linear matrix inequalities (LMIs), which can be solved easily by various efficient convex optimization algorithms. Two numerical examples are provided to demonstrate the enhancement of the feasible region of the proposed criteria by the comparison of maximum allowable delay bounds.
Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. This paper focuses on the problem of strictly (Q,S,R)-γ-dissipativity analysis for neural networks with two-delay components. Based on the dynamic delay interval method, a Lyapunov–Krasovskii functional is constructed. By solving its self-positive definite and derivative negative definite conditions via an extended reciprocally convex matrix inequality, several new sufficient conditions that guarantee the neural networks strictly (Q,S,R)-γ-dissipative are derived. Furthermore, the dissipativity analysis of neural networks with two-delay components is extended to the stability analysis. Finally, two numerical examples are employed to illustrate the advantages of the proposed method.
Wirtinger-based multiple integral inequality for stability of time-delay systems. Note that the conservatism of the delay-dependent stability criteria can be reduced by increasing the integral terms in Lyapunov–Krasovskii functional (LKF). This brief revisits the stability problem for a class of linear time-delay systems via multiple integral approach. The novelty of this brief lies in that a Wirtinger-based multiple integral inequality is employed to estimate the derivative of a class of LKF with multiple integral terms. Based on these innovations, a new delay-dependent stability criterion is derived in terms of linear matrix inequalities. Two numerical examples are exploited to demonstrate the effectiveness and superiority of the proposed method.
Robust passivity analysis of neural networks with discrete and distributed delays. This paper focuses on the problem of passivity of neural networks in the presence of discrete and distributed delay. By constructing an augmented Lyapunov functional and combining a new integral inequality with the reciprocally convex approach to estimate the derivative of the Lyapunov–Krasovskii functional, sufficient conditions are established to ensure the passivity of the considered neural networks, in which some useful information on the neuron activation function ignored in the existing literature is taken into account. Three numerical examples are provided to demonstrate the effectiveness and the merits of the proposed method.
Stability and dissipativity analysis of static neural networks with interval time-varying delay This paper focuses on the problems of stability and dissipativity analysis for static neural networks (NNs) with interval time-varying delay. A new augmented Lyapunov–Krasovskii functional is firstly constructed, in which the information on the activation function is taken fully into account. Then, by employing a Wirtinger-based inequality to estimate the derivative of Lyapunov–Krasovskii functional, an improved stability criterion is derived for the considered neural networks. The result is extended to dissipativity analysis and a sufficient condition is established to assure the neural networks strictly dissipative. Two numerical examples are provided to demonstrate the effectiveness and the advantages of the proposed method.
A new looped-functional for stability analysis of sampled-data systems. In this paper, a new two-sided looped-functional is introduced for stability analysis of sampled-data systems. The functional fully utilizes the information on both the intervals x(t) to x(t_k) and x(t) to x(t_{k+1}). Based on the two-sided functional, an improved stability condition is derived in the form of linear matrix inequality (LMI). Numerical examples show that the result computed by the presented condition approximates nearly the theoretical bound (bound obtained by eigenvalue analysis) and outperforms substantially others in the existing literature.
Robust Stabilization for Uncertain Saturated Time-Delay Systems: A Distributed-Delay-Dependent Polytopic Approach. This technical note investigates the robust stabilization problem for uncertain linear systems with discrete and distributed delays under saturated state feedback. Different from the existing approaches, a distributed-delay-dependent polytopic approach is proposed in this technical note, and the saturation nonlinearity is represented as the convex combination of state feedback and auxiliary distributed-delay feedback. Then, by incorporating an appropriate augmented Lyapunov-Krasovskii (L-K) functional and some integral inequalities, the less conservative stabilization and robust stabilization conditions are proposed in terms of linear matrix inequalities (LMIs). The effectiveness and reduced conservatism of the proposed conditions are illustrated by numerical examples.
State estimation for uncertain Markovian jump neural networks with mixed delays. This paper investigates the problem of state estimation for uncertain Markovian jump neural networks (NNs) with additive time-varying discrete delay components and distributed delay. By constructing a novel Lyapunov–Krasovskii function with multiple integral terms and using an improved inequality, several sufficient conditions are derived. Some improved conditions are formulated in terms of a set of linear matrix inequalities (LMIs), under which the estimation error system is globally exponentially stable in the mean square sense. Some numerical examples are provided to demonstrate the effectiveness of the proposed results.
Object-oriented modeling and design
Synthetic texturing using digital filters
Supporting the negotiation life cycle This article describes processes, products, and perspectives of the negotiation life cycle and applies this framework to show: (1) how different life cycle phases have different support requirements, and (2) how existing tools differ in their level of support for these various phases. We illustrate the use of the framework by showing how it can guide the selection of negotiation support tools for a specific negotiation context.
Non-Repetitive Tilings In 1906 Axel Thue showed how to construct an infinite non-repetitive (or square-free) word on an alphabet of size 3. Since then this result has been rediscovered many times and extended in many ways. We present a two-dimensional version of this result. We show how to construct a rectangular tiling of the plane using 5 symbols which has the property that lines of tiles which are horizontal, vertical or have slope +1 or −1 contain no repetitions. As part of the construction we introduce a new type of word, one that is non-repetitive up to mod k, which is of interest in itself. We also indicate how our results might be extended to higher dimensions.
Information hiding in medical images: a robust medical image watermarking system for E-healthcare Electronic transmission of medical images is one of the primary requirements in a typical Electronic-Healthcare (E-Healthcare) system. However this transmission could be liable to hackers who may modify the whole medical image or only a part of it during transit. To guarantee the integrity of a medical image, digital watermarking is being used. This paper presents two different watermarking algorithms for medical images in the transform domain. In the first technique, a digital watermark and Electronic Patient Record (EPR) have been embedded in both regions: Region of Interest (ROI) and Region of Non-Interest (RONI). In the second technique, the Region of Interest (ROI) is kept untouched for tele-diagnosis purposes and the Region of Non-Interest (RONI) is used to hide the digital watermark and EPR. In either algorithm an 8 × 8 block based Discrete Cosine Transform (DCT) has been used. In each 8 × 8 block two DCT coefficients are selected and their magnitudes are compared for embedding the watermark/EPR. The selected coefficients are modified by using a threshold for embedding bit '0' or bit '1' of the watermark/EPR. The proposed techniques have been found robust not only to singular attacks but also to hybrid attacks. Comparison results vis-à-vis payload and robustness show that the proposed techniques perform better than some existing state-of-the-art techniques. As such the proposed algorithms could be useful for e-healthcare systems.
1.023992
0.026667
0.017037
0.012222
0.008495
0.006829
0.002236
0.000667
0.000079
0
0
0
0
0
A relational view of activities for systems analysis and design
The Metaview system for many specification environments The use of metasystems, which can automatically generate the major parts of a software-development environment, for computer-aided software engineering (CASE) is discussed. One such system, called Metaview, is considered. Environment definition and tool development using Metaview are examined.
CASE: reliability engineering for information systems Classical and formal methods of information and software systems development are reviewed. The use of computer-aided software engineering (CASE) is discussed. These automated environments and tools make it practical and economical to use formal system-development methods. Their features, tools, and adaptability are discussed. The opportunities that CASE environments provide to use analysis techniques to assess the reliability of information systems before they are implemented and to audit a completed system against its design and maintain the system description as accurate documentation are examined.
A Requirements Engineering Methodology for Real-Time Processing Requirements This paper describes a methodology for the generation of software requirements for large, real-time unmanned weapons systems. It describes what needs to be done, how to evaluate the intermediate products, and how to use automated aids to improve the quality of the product. An example is provided to illustrate the methodology steps and their products and the benefits. The results of some experimental applications are summarized.
Teamwork Support in a Knowledge-Based Information Systems Environment Development assistance for interactive database applications (DAIDA) is an experimental environment for the knowledge-assisted development and maintenance of database-intensive information systems from object-oriented requirements and specifications. Within the DAIDA framework, an approach to integrate different tasks encountered in software projects via a conceptual modeling strategy has been developed. Emphasis is put on integrating the semantics of the software development domain with aspects of group work, on social strategies to negotiate problems by argumentation, and on assigning responsibilities for task fulfillment by way of contracting. The implementation of a prototype is demonstrated with a sample session.
Building reliable interactive information systems User software engineering (USE) is a methodology, with supporting tools, for the specification, design, and implementation of interactive information systems. With the USE approach, the user interface is formally specified with augmented state transition diagrams, and the operations may be formally specified with preconditions and postconditions. The USE state transition diagrams may be directly executed with the application development tool RAPID/USE. RAPID/USE and its associated tool RAPSUM create and analyze logging information that is useful for system testing, and for evaluation and modification of the user interface. The authors briefly describe the USE transition diagrams and the formal specification approach, and show how these tools and techniques aid in the creation of reliable interactive information systems.
The transformation schema: An extension of the data flow diagram to represent control and timing The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behavior. This paper describes an extension of the data flow diagram called the transformation schema. The transformation schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behavior over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent “centers of activity” (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values. The transformation schema permits the creation and evaluation of two different types of system models. In the essential (requirements) model, the schema is used to represent a virtual machine with infinite resources. The elements of the schema depict idealized processing and memory components. In the implementation model, the schema is used to represent a real machine with limited resources, and the results of the execution predict the behavior of an implementation of requirements. The transformations of the schema can depict software running on digital processors, hard-wired digital or analog circuits, and so on, and the stores of the schema can depict disk files, tables in memory, and so on.
A Total System Design Framework
On formal requirements modeling languages: RML revisited No abstract available.
Object-oriented modeling and design
A metamodel approach for the management of multiple models and the translation of schemes A metamodel approach is proposed as a framework for the definition of different data models and the management of translations of schemes from one model to another. This notion is useful in an environment for the support of the design and development of information systems, since different data models can be used and schemes referring to different models need to be exchanged. The approach is based on the observation that the constructs used in the various models can be classified into a limited set of basic types, such as lexical type, abstract type, aggregation, function. It follows that the translations of schemes can be specified on the basis of translations of the involved types of constructs: this is effectively performed by means of a procedural language and a number of predefined modules that express the standard translations between the basic constructs.
A disciplined approach to office analysis To define office requirements, the authors propose a disciplined language in which a good portion of conventionality and known and concrete concepts are associated with a minimum formalism. The language can be used in analyzing an office for the purpose of designing a computer-based office support system. Given the language, the designer can organize information obtained by people working in the office and state the office requirements that will be the basis for developing a suitable system. The features of the language are illustrated and its morphology and syntax are explained.
An Efficient Reordering Prediction-Based Lossless Compression Algorithm for Hyperspectral Images In this letter, we propose an efficient lossless compression algorithm for hyperspectral images; it is based on an adaptive spectral band reordering algorithm and an adaptive backward previous closest neighbor (PCN) prediction with error feedback. The adaptive spectral band reordering algorithm has some strong points. It can adaptively determine the range of spectral bands needed to be reordered, ...
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.223036
0.03786
0.024784
0.002089
0.000466
0.000121
0.000023
0.00001
0.000005
0.000002
0
0
0
0
Transformational Design of Real-Time Systems Part I: From Requirements to Program Specifications .   In the two parts of this article a transformational approach to the design of distributed real-time systems is presented. The starting point are global requirements formulated in a subset of Duration Calculus called implementables and the target are programs in an OCCAM dialect PL. In the first part we show how the level of program specifications represented by a language SL can be reached. SL combines regular expressions with ideas from action systems and with time conditions, and can express the distributed architecture of the implementation. While Duration Calculus is state-based, SL is event-based, and the switch between these two worlds is a prominent step in the transformation from implementables to SL. Both parts of the transformational calculus rely on the mixed term techniques by which syntax pieces of two languages are mixed in a semantically coherent manner. In the first part of the article mixed terms between implementables and SL and in the second part of the article mixed terms between SL and PL are used. The approach is illustrated by the example of a computer controlled gas burner.
Transformational Design of Real-Time Systems. Part II: From Program Specifications to Programs .   In the two parts of this article we present a transformational approach to the design of real-time systems. The overall starting point are requirements formulated in a subset of Duration Calculus called implementables and the target are programs in an OCCAM dialect PL. In the first part we have shown how the level of program specifications represented by a language SL can be reached. SL combines regular expressions with action systems and time conditions. In this part we show the transformation from SL to PL. It relies on the ‘Expansion strategy’ by which certain transformations can be applied in an almost automatic fashion. In many places transformations consist of algebraic reasoning by laws for operations on programs. Both parts of our transformational calculus rely on the mixed term techniques in which syntax pieces of two languages are mixed in a semantically coherent manner. In the first part of the article mixed terms between implementables and SL have been used, in the present part mixed terms between SL and PL are used. The approach is illustrated by the example of a computer controlled gas burner from part I again.
Interfaces between Languages for Communicating Systems A system design typically involves various languages, each one describing the system at a different level of abstraction. To achieve a trustworthy design, it is essential that the interfaces between these languages are conceptually well understood and mathematically sound. Recent research in semantics attempts to clarify and structure the development of such interfaces.
The Production Cell: A Verified Real-Time System This paper applies and refines the ProCoS approach to transformational design of real-time systems to a benchmark case study, the Karlsruhe production cell [10, 9]. We start by formalizing the informal requirements of [10, 9] in Duration Calculus and end with a distributed controller architecture where all components are specified in the program specification language SL^time [18]. Novel is the full treatment of hybrid system components in a parametric and thus reusable way.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive - small diagrams can express complex behavior - as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
Stepwise Refinement of Distributed Systems, Models, Formalisms, Correctness, REX Workshop, Mook, The Netherlands, May 29 - June 2, 1989, Proceedings
Higher Order Software - A Methodology for Defining Software The key to software reliability is to design, develop, and manage software with a formalized methodology which can be used by computer scientists and applications engineers to describe and communicate interfaces between systems. These interfaces include: software to software; software to other systems; software to management; as well as discipline to discipline within the complete software development process. The formal methodology of Higher Order Software (HOS), specifically aimed toward large-scale multiprogrammed/multiprocessor systems, is dedicated to systems reliability. With six axioms as the basis, a given system and all of its interfaces is defined as if it were one complete and consistent computable system. Some of the derived theorems provide for: reconfiguration of real-time multiprogrammed processes, communication between functions, and prevention of data and timing conflicts.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Distributed data structures in Linda A distributed data structure is a data structure that can be manipulated by many parallel processes simultaneously. Distributed data structures are the natural complement to parallel program structures, where a parallel program (for our purposes) is one that is made up of many simultaneously active, communicating processes. Distributed data structures are impossible in most parallel programming languages, but they are supported in the parallel language Linda and they are central to Linda programming style. We outline Linda, then discuss some distributed data structures that have arisen in Linda programming experiments to date. Our intent is neither to discuss the design of the Linda system nor the performance of Linda programs, though we do comment on both topics; we are concerned instead with a few of the simpler and more basic techniques made possible by a language model that, we argue, is subtly but fundamentally different in its implications from most others. This material is based upon work supported by the National Science Foundation under Grant No. MCS-8303905. Jerry Leichter is supported by a Digital Equipment Corporation Graduate Engineering Education Program fellowship.
Probabilistic predicate transformers Probabilistic predicates generalize standard predicates over a state space; with probabilistic predicate transformers one thus reasons about imperative programs in terms of probabilistic pre- and postconditions. Probabilistic healthiness conditions generalize the standard ones, characterizing “real” probabilistic programs, and are based on a connection with an underlying relational model for probabilistic execution; in both contexts demonic nondeterminism coexists with probabilistic choice. With the healthiness conditions, the associated weakest-precondition calculus seems suitable for exploring the rigorous derivation of small probabilistic programs.
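As a concrete (and standard, not paper-specific) illustration of how probabilistic predicate transformers act, the weakest pre-expectation of probabilistic choice is the corresponding mixture, while demonic nondeterminism takes the pointwise minimum; here post denotes a probabilistic predicate (an expectation) over the state space:

```latex
% Illustrative rules only; notation follows the usual expectation-transformer treatment.
\[
  wp\bigl(P \;{}_{p}\!\oplus\; Q\bigr)(post) \;=\; p \cdot wp(P)(post) \;+\; (1-p)\cdot wp(Q)(post)
\]
\[
  wp\bigl(P \sqcap Q\bigr)(post) \;=\; \min\bigl( wp(P)(post),\; wp(Q)(post) \bigr)
\]
```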
A Lossless Embedded Compression Using Significant Bit Truncation for HD Video Coding Increasing the image size of a video sequence aggravates the memory bandwidth problem of a video coding system. Despite many embedded compression (EC) algorithms proposed to overcome this problem, no lossless EC algorithm able to handle high-definition (HD) size video sequences has been proposed thus far. In this paper, a lossless EC algorithm for HD video sequences and related hardware architecture is proposed. The proposed algorithm consists of two steps. The first is a hierarchical prediction method based on pixel averaging and copying. The second step involves significant bit truncation (SBT) which encodes prediction errors in a group with the same number of bits so that the multiple prediction errors are decoded in a clock cycle. The theoretical lower bound of the compression ratio of the SBT coding was also derived. Experimental results have shown a 60% reduction of memory bandwidth on average. Hardware implementation results have shown that a throughput of 14.2 pixels/cycle can be achieved with 36 K gates, which is sufficient to handle HD-size video sequences in real time.
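A much-simplified model of the grouping step (our own toy version: the sign mapping, the group size, and the output format are illustrative and do not match the paper's bitstream) is:

```python
def fold_sign(e):
    # Map a signed prediction error to a non-negative integer (zig-zag mapping).
    return 2 * e if e >= 0 else -2 * e - 1

def sbt_encode(errors, group_size=4):
    # Each group of prediction errors is written with the bit width of its
    # largest member, so all errors of a group can be decoded in one step.
    stream = []
    for i in range(0, len(errors), group_size):
        group = [fold_sign(e) for e in errors[i:i + group_size]]
        width = max(max(group).bit_length(), 1)
        stream.append((width, [format(v, "0{}b".format(width)) for v in group]))
    return stream

print(sbt_encode([0, -1, 3, 2, 7, 0, -4, 1]))
```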
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.11
0.1
0.033214
0.01
0.000417
0.000058
0.000003
0
0
0
0
0
0
0
HEVC-based lossless compression of Whole Slide pathology images This paper proposes an HEVC-based method for lossless compression of Whole Slide pathology Images (WSIs). Based on the observation that WSIs usually feature a high number of edges and multidirectional patterns due to the great variety of cellular structures and tissues depicted, we combine the advantages of sample-by-sample differential pulse code modulation (SbS-DPCM) and edge prediction into the intra coding process. The objective is to enhance the prediction performance where strong edge information is encountered. This paper also proposes an implementation of the decoding process that maintains the block-wise coding structure of HEVC when SbS-DPCM and edge prediction are employed. Experimental results on various WSIs show that the proposed method attains average bit-rate savings of 7.67%.
Fast Intra-Prediction for Lossless Coding of Screen Content in HEVC The High Efficiency Video Coding (HEVC) standard achieves higher encoding efficiency than previous standards such as H.264/AVC. One key contributor to this improvement is the intra-prediction method that supports a large number of prediction directions at a cost of high computational complexity. Within the context of mobile devices with limited power and computational capabilities, reductions on encoding complexity are important; particularly to encode new data formats such as screen content sequences. This paper presents a novel intra-prediction method for lossless coding of this type of sequences. The method employs one of three possible intra-prediction modes to encode each prediction block. These three modes predict strong edges, model different directional patterns and generate smooth surfaces. The proposed method provides a decrease of up to 53.84% in the HEVC intra-prediction lossless encoding time, with average bit-rate reductions of 7.05% for a variety of screen content sequences.
Lossless Compression of Medical Images Using 3-D Predictors. This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), which is one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method inclu...
LOCO-I: a low complexity, context-based, lossless image compression algorithm LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus “enjoying the best of both worlds.” The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications
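The symbol-wise Golomb-Rice code referred to above is easy to sketch; the parameter k is fixed here, whereas LOCO-I/JPEG-LS selects it adaptively per context (that adaptation and the context modeling are omitted from this illustration):

```python
def zigzag(e):
    # Fold a signed prediction residual into a non-negative integer.
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice(n, k):
    # Unary-coded quotient terminated by '0', followed by the k low-order bits
    # of the remainder.
    q = n >> k
    bits = "1" * q + "0"
    if k > 0:
        bits += format(n & ((1 << k) - 1), "0{}b".format(k))
    return bits

# Example: encode a few residuals with parameter k = 2.
print([golomb_rice(zigzag(e), 2) for e in (0, -1, 3, -5)])
```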
Extending the CCSDS Recommendation for Image Data Compression for Remote Sensing Scenarios This paper presents prominent extensions that have been proposed for the Consultative Committee for Space Data Systems Recommendation for Image Data Compression (CCSDS-122-B-1). Thanks to the proposed extensions, the Recommendation gains several important featured advantages: It allows any number of spatial wavelet decomposition levels; it provides scalability by quality, position, resolution, and...
Relations between entropy and error probability The relation between the entropy of a discrete random variable and the minimum attainable probability of error made in guessing its value is examined. While Fano's inequality provides a tight lower bound on the error probability in terms of the entropy, the present authors derive a converse result - a tight upper bound on the minimal error probability in terms of the entropy. Both bounds are sharp, and can draw a relation, as well, between the error probability for the maximum a posteriori (MAP) rule, and the conditional entropy (equivocation), which is a useful uncertainty measure in several applications. Combining this relation and the classical channel coding theorem, the authors present a channel coding theorem for the equivocation which, unlike the channel coding theorem for error probability, is meaningful at all rates. This theorem is proved directly for DMCs, and from this proof it is further concluded that for R ≥ C the equivocation achieves its minimal value of R-C at the rate of n^{-1/2} where n is the block length.
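For reference, the lower bound mentioned above is Fano's inequality, shown below in its standard form; the paper's converse (a tight upper bound on the MAP error probability in terms of the same conditional entropy) has a piecewise-linear form that we do not reproduce here:

```latex
% Fano's inequality for guessing X from Y over an alphabet of size |X|,
% with H_b the binary entropy function (logarithms in bits).
\[
  H(X \mid Y) \;\le\; H_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr),
  \qquad H_b(p) = -p\log p - (1-p)\log(1-p).
\]
```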
Coding of sources with two-sided geometric distributions and unknown parameters Lossless compression is studied for a countably infinite alphabet source with an unknown, off-centered, two-sided geometric (TSG) distribution, which is a commonly used statistical model for image prediction residuals. We demonstrate that arithmetic coding based on a simple strategy of model adaptation, essentially attains the theoretical lower bound to the universal coding redundancy associated with this model. We then focus on more practical codes for the TSG model, that operate on a symbol-by-symbol basis, and study the problem of adaptively selecting a code from a given discrete family. By taking advantage of the structure of the optimum Huffman tree for a known TSG distribution, which enables simple calculation of the codeword of every given source symbol, an efficient adaptive strategy is derived
Optimized onboard lossless and near-lossless compression of hyperspectral data using CALIC We propose a new lossless and near-lossless compression algorithm for hyperspectral images based on context-based adaptive lossless image coding (CALIC). Specifically, we propose a novel multiband spectral predictor, along with optimized model parameters and optimization thresholds. The resulting algorithm is suitable for compression of data in band-interleaved-by-line format; its performance evaluation on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data shows that it outperforms 3-D-CALIC as well as other state-of-the-art compression algorithms.
Human-computer interface development: concepts and systems for its management Human-computer interface management, from a computer science viewpoint, focuses on the process of developing quality human-computer interfaces, including their representation, design, implementation, execution, evaluation, and maintenance. This survey presents important concepts of interface management: dialogue independence, structural modeling, representation, interactive tools, rapid prototyping, development methodologies, and control structures. Dialogue independence is the keystone concept upon which all the other concepts depend. It is a characteristic that separates design of the interface from design of the computational component of an application system so that modifications in either tend not to cause changes in the other. The role of a dialogue developer, whose main purpose is to create quality interfaces, is a direct result of the dialogue independence concept. Structural models of the human-computer interface serve as frameworks for understanding the elements of interfaces and for guiding the dialogue developer in their construction. Representation of the human-computer interface is accomplished by a variety of notational schemes for describing the interface. Numerous kinds of interactive tools for human-computer interface development free the dialogue developer from much of the tedium of "coding" dialogue. The early ability to observe behavior of the interface—and indeed that of the whole application system—provided by rapid prototyping increases communication among system designers, implementers, evaluators, and end-users. Methodologies for interactive system development consider interface management to be an integral part of the overall development process and give emphasis to evaluation in the development life cycle. Finally, several types of control structures govern how sequencing among dialogue and computational components is designed and executed. Numerous systems for human-computer interface management are presented to illustrate these concepts.
The Depth and Width of Local Minima in Discrete Solution Spaces Heuristic search techniques such as simulated annealing and tabu search require "tuning" of parameters (i.e., the cooling schedule in simulated annealing, and the tabu list length in tabu search), to achieve optimum performance. In order for a user to anticipate the best choice of parameters, thus avoiding extensive experimentation, a better understanding of the solution space of the problem to be solved is needed. Two functions of the solution space, the maximum depth and the maximum width of local minima are discussed here, and sharp bounds on the value of these functions are given for the 0-1 knapsack problem and the cardinality set covering problem.
Reduction: a method of proving properties of parallel programs When proving that a parallel program has a given property it is often convenient to assume that a statement is indivisible, i.e. that the statement cannot be interleaved with the rest of the program. Here sufficient conditions are obtained to show that the assumption that a statement is indivisible can be relaxed and still preserve properties such as halting. Thus correctness proofs of a parallel system can often be greatly simplified.
Making Distortions Comprehensible This paper discusses visual information representation from the perspective of human comprehension. The distortion viewing paradigm is an appropriate focus for this discussion as its motivation has always been to create more understandable displays. While these techniques are becoming increasingly popular for exploring images that are larger than the available screen space, in fact users sometimes report confusion and disorientation. We provide an overview of structural changes made in response to this phenomenon and examine methods for incorporating visual cues based on human perceptual skills.
LANSF: a protocol modelling environment and its implementation LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix.
Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se...
1.076444
0.08
0.066667
0.00139
0.00019
0.000021
0.000009
0
0
0
0
0
0
0
Tuning between Exponential Functions and Zones for Membership Functions Selection in Voronoi-Based Zoning for Handwritten Character Recognition In Handwritten Character Recognition, zoning is rightly considered as one of the most effective feature extraction techniques. In the past, many zoning methods have been proposed, based on static and dynamic zoning design strategies. Notwithstanding, little attention has been paid so far to the role of function-zone membership functions, that define the way in which a feature influences different zones of the pattern. In this paper the effectiveness of membership functions for zoning-based classification is investigated. For the purpose, a useful representation of zoning methods based on Voronoi Diagrams is adopted and several membership functions are considered, according to abstract-, ranked- and measurement-level strategies. Furthermore, a new class of membership functions with adaptive capabilities is introduced and a real-coded genetic algorithm is proposed to determine both the optimal zoning and the adaptive membership functions most profitable for a given classification problem. The experimental tests, carried out in the field of handwritten digit recognition, show the superiority of adaptive membership functions compared to traditional functions, whatever zoning method is used.
Fuzzy-Zoning-Based Classification for Handwritten Characters In zoning-based classification, a membership function defines the way a feature influences the different zones of the zoning method. This paper presents a new class of membership functions, which are called fuzzy-membership functions (FMFs), for zoning-based classification. These FMFs can be easily adapted to the specific characteristics of a classification problem in order to maximize classification performance. In this research, a real-coded genetic algorithm is presented to find, in a single optimization procedure, the optimal FMF, together with the optimal zoning described by Voronoi tessellation. The experimental results, which are carried out in the field of handwritten digit and character recognition, indicate that optimal FMF performs better than other membership functions based on abstract-level, ranked-level, and measurement-level weighting models, which can be found in the literature.
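To make the idea of a zone membership function concrete, the sketch below assigns a feature point a soft weight for every Voronoi zone using a generic exponential decay; the weighting form and the parameter beta are our own illustrative choices, not the optimized FMF of the paper:

```python
import numpy as np

def zone_memberships(point, zone_centers, beta=1.0):
    # Exponential, measurement-level weighting: zones whose Voronoi centers are
    # closer to the feature point receive larger weights; weights sum to 1.
    d = np.linalg.norm(np.asarray(zone_centers, float) - np.asarray(point, float), axis=1)
    w = np.exp(-beta * d)
    return w / w.sum()

# Example: four zone centers on the unit square.
print(zone_memberships((0.2, 0.7), [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]))
```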
Voronoi-Based Zoning Design by Multi-objective Genetic Optimization This paper presents a new approach to optimal zoning design. The approach uses a multi-objective genetic algorithm to define, in a unique process, the optimal number of zones of the zoning method along with the optimal zones, defined through Voronoi diagrams. The experimental tests, carried out in the field of handwritten digit recognition, show the superiority of new approach with respect to traditional dynamic approaches for zoning design, based on single-objective optimization techniques.
Analysis of Membership Functions for Voronoi-Based Classification This paper addresses the problem of membership function selection for zoning-based classification. Different types of membership functions are considered based on abstract-level, ranked-level and measurement-level models and their effectiveness is estimated under different Voronoi-based zoning methods. The experimental tests, carried out in the field of hand-written numeral recognition, show that the best results are obtained when measurement-level models based on exponential models are used as membership functions.
Class-Oriented Recognizer Design by Weighting Local Decisions This paper presents a new technique for the design of class-oriented recognizer. For each recognizer a genetic technique is used to determine the weights to balance, in an optimal way, the local decisions obtained from the analysis by parts of the patterns of the specific class. The experimental results, that have been obtained in the field of hand-written numeral and character recognition, demonstrate the superiority of the new technique with respect to other traditional approaches.
Design of a neural network character recognizer for a touch terminal We describe a system which can recognize digits and uppercase letters handprinted on a touch terminal. A character is input as a sequence of [ x(t), y(t) ] coordinates, subjected to very simple preprocessing, and then classified by a trainable neural network. The classifier is analogous to “time delay neural networks” previously applied to speech recognition. The network was trained on a set of 12,000 digits and uppercase letters, from approximately 250 different writers, and tested on 2500 such characters from other writers. Classification accuracy exceeded 96% on the test examples.
Techniques for automatically correcting words in text Research aimed at correcting words in text has focused on three progressively more difficult problems: (1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent word correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.
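A minimal isolated-word corrector of the kind surveyed above combines dictionary lookup for nonword detection with minimum edit distance for correction; the tiny lexicon in the example is just a stand-in:

```python
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(word, lexicon):
    # Nonword detection by lookup; isolated-word correction by nearest lexicon entry.
    if word in lexicon:
        return word
    return min(lexicon, key=lambda w: edit_distance(word, w))

print(correct("recieve", {"receive", "recipe", "relieve"}))
```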
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Reexamining the cluster hypothesis: scatter/gather on retrieval results
Specification and verification of concurrent systems in CESAR The aim of this paper is to illustrate by an example, the alternating bit protocol, the use of CESAR, an interactive system for aiding the design of distributed applications.
Animating Conceptual Graphs This paper addresses operational aspects of conceptual graph systems. This paper is an attempt to formalize operations within a conceptual graph system by using conceptual graphs themselves to describe the mechanism. We outline a unifying approach that can integrate the notions of a fact base, type definitions, actor definitions, messages, and the assertion and retraction of graphs. Our approach formalizes the notion of type expansion and actor definitions, and in the process also formalizes the notion for any sort of formal assertion in a conceptual graph system. We introduce definitions as concept types called assertional types which are animated through a concept type called an assertional event. We illustrate the assertion of a type definition, a nested definition and an actor definition, using one extended example. We believe this mechanism has immediate and far-reaching value in offering a self-contained, yet animate conceptual graph system architecture.
Specification Diagrams for Actor Systems Specification diagrams (SD's) are a novel form of graphical notation for specifying open distributed object systems. The design goal is to define notation for specifying message-passing behavior that is expressive, intuitively understandable, and that has formal semantic underpinnings. The notation generalizes informal notations such as UML's Sequence Diagrams and broadens their applicability to later in the design cycle. Specification diagrams differ from existing actor and process algebra presentations in that they are not executable per se; instead, like logics, they are inherently more biased toward specification. In this paper we rigorously define the language syntax and semantics and give examples that show the expressiveness of the language, how properties of specifications may be asserted diagrammatically, and how it is possible to reason rigorously and modularly about specification diagrams.
A Survey on the Flexibility Requirements Related to Business Processes and Modeling Artifacts In competitive and evolving environments only organizations which can manage complexity and can respond to rapid change in an informed manner can gain a competitive advantage. During the early 90's, workflow technologies offered a transversal integration capacity to the enterprise applications. Today, to "integrate" enterprise applications - and the activities they support - into business processes is not sufficient. The architecture of this integration should also be flexible. Enterprise requirements highlight flexible and adaptive processes whose execution can evolve (i) according to situations that cannot always be prescribed, and/or (ii) according to business changes (organizational, process improvement, strategic ...). More recent works highlight requirements in terms of flexible and adaptive workflows, whose execution can evolve according to situations that cannot always be prescribed. This paper presents the state of the art for flexible business process management systems and criteria for comparing them.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.060864
0.05872
0.0584
0.051063
0.035966
0.002608
0.001256
0
0
0
0
0
0
0
Visualization, Band Ordering and Compression of Hyperspectral Images Air-borne and space-borne acquired hyperspectral images are used to recognize objects and to classify materials on the surface of the earth. The state of the art compressor for lossless compression of hyperspectral images is the Spectral oriented Least SQuares (SLSQ) compressor (see [1-7]). In this paper we discuss hyperspectral image compression: we show how to visualize each band of a hyperspectral image and how this visualization suggests that an appropriate band ordering can lead to improvements in the compression process. In particular, we consider two important distance measures for band ordering: Pearson's Correlation and Bhattacharyya distance, and report on experimental results achieved by a Java-based implementation of SLSQ.
Lossless compression of hyperspectral images: Look-up tables with varying degrees of confidence State-of-the-art algorithms LUT and LAIS-LUT, proposed for lossless compression of hyperspectral images, exploit high spectral correlations in these images, and use look-up tables to perform predictions. However, there are cases where their predictions are not accurate. In this work we also use look-up tables, but give these tables different degrees of confidence, based on the local variations of the scaling factor. Our results are highly satisfactory and outperform both LUT and LAIS-LUT methods.
Compression Of Multidimensional Images Using Jpeg2000 JPEG2000 Part 2 supports the use of Multicomponent transforms (MCTs) to decorrelate multicomponent images along the component direction. Such point transforms can be performed on arbitrary subsets of components, known as "component collections." These Part 2 extensions have been used for compressing 3-D images in applications such as medical imaging and remote sensing. It is widely believed that the MCT extensions are applicable only to 3-D data. In this letter, we demonstrate their use for compressing N-D datasets for any N >= 3.
Progressive distributed coding of multispectral images We present in this paper a novel distributed coding scheme for lossless and progressive compression of multispectral images. The main strategy of this new scheme is to explore data redundancies at the decoder in order to design a lightweight yet very efficient encoder suitable for onboard applications during acquisition of multispectral image. A sequence of increasing resolution layers is encoded and transmitted successively until the original image can be losslessly reconstructed from all layers. We assume that the decoder with abundant resources is able to perform adaptive region-based predictor estimation to capture spatially varying spectral correlation with the knowledge of lower-resolution layers, thus generate high quality side information for decoding the higher-resolution layer. Progressive transmission enables the spectral correlation to be refined successively, resulting in gradually improved decoding performance of higher-resolution layers as more data are decoded. Simulations have been carried out to demonstrate that the proposed scheme, with innovative combination of low complexity encoding, lossless compression and progressive coding, can achieve competitive performance comparing with high complexity state-of-the-art 3-D DPCM technique.
Low-Complexity Compression Method for Hyperspectral Images Based on Distributed Source Coding. In this letter, we propose a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme for hyperspectral images. First, the DCT was applied to the hyperspectral images. Then, set-partitioning-based approach was utilized to reorganize DCT coefficients into waveletlike tree structure and extract the sign, refinement, and significance bitplanes. Third, low-density pa...
Regression Wavelet Analysis for Lossless Coding of Remote-Sensing Data. A novel wavelet-based scheme to increase coefficient independence in hyperspectral images is introduced for lossless coding. The proposed regression wavelet analysis (RWA) uses multivariate regression to exploit the relationships among wavelet-transformed components. It builds on our previous nonlinear schemes that estimate each coefficient from neighbor coefficients. Specifically, RWA performs a ...
Lossless Compression of Hyperspectral Images Using a Quantized Index to Lookup Tables We propose an enhancement to the algorithm for lossless compression of hyperspectral images using lookup tables (LUTs). The original LUT method searched the previous band for a pixel of equal value to the pixel colocalized with the one to be predicted. The pixel in the same position as the obtained pixel in the current band is used as a predictor. LUTs were used to speed up the search. The LUT method has also been extended into a method called Locally Averaged Interband Scaling (LAIS)-LUT that uses two LUTs per band. One of the two LUT predictors that is the closest one to the LAIS estimate is chosen as the predictor for the current pixel. We propose the uniform quantization of the colocated pixels before using them for indexing the LUTs. The use of quantization reduces the size of the LUTs by an order of magnitude. The results show that the proposed method outperforms previous methods; a 3% increase in compression efficiency was observed compared to the current state-of-the-art method, LAIS-LUT.
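A stripped-down version of the lookup-table prediction that these LUT methods build on (single table, no LAIS scaling or confidence weighting; the quantization step and the names are our own choices) is sketched below:

```python
import numpy as np

def lut_predict(cur_band, prev_band, q=4, levels=65536):
    # For each pixel (in raster order), the quantized value of the colocated pixel
    # in the previous band indexes a table holding the current-band value observed
    # the last time that index occurred; that entry is the prediction.
    lut = np.full(levels // q + 1, -1, dtype=np.int64)
    pred = np.empty(cur_band.size, dtype=np.int64)
    prev_flat = prev_band.ravel().astype(np.int64)
    cur_flat = cur_band.ravel().astype(np.int64)
    for i, (p, c) in enumerate(zip(prev_flat, cur_flat)):
        idx = p // q
        pred[i] = lut[idx] if lut[idx] >= 0 else p   # fall back to the colocated value
        lut[idx] = c                                 # causal update: decoder can mirror it
    return pred.reshape(cur_band.shape)
```

The prediction residuals (current band minus the returned prediction) would then be entropy coded; only the causal update matters for decodability.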
Hyperspectral Image Compression Employing a Model of Anomalous Pixels We propose a new lossy compression algorithm for hyperspectral images, which is based on the spectral Karhunen-Loève transform, followed by spatial JPEG 2000, which employs a model of anomalous pixels during the compression process. Results on Airborne Visible/Infrared Imaging Spectrometer scenes show that the new algorithm provides better rate-distortion performance, as well as improved anomaly detection performance, with respect to the state of the art. Index Terms: Anomaly detection, discrete wavelet transform (DWT), hyperspectral data, JPEG 2000, Karhunen-Loève transform (KLT), lossy compression, Reed-Xiaoli (RX) algorithm, wavelet.
Relations between entropy and error probability The relation between the entropy of a discrete random variable and the minimum attainable probability of error made in guessing its value is examined. While Fano's inequality provides a tight lower bound on the error probability in terms of the entropy, the present authors derive a converse result - a tight upper bound on the minimal error probability in terms of the entropy. Both bounds are sharp, and can draw a relation, as well, between the error probability for the maximum a posteriori (MAP) rule, and the conditional entropy (equivocation), which is a useful uncertainty measure in several applications. Combining this relation and the classical channel coding theorem, the authors present a channel coding theorem for the equivocation which, unlike the channel coding theorem for error probability, is meaningful at all rates. This theorem is proved directly for DMCs, and from this proof it is further concluded that for R ≥ C the equivocation achieves its minimal value of R-C at the rate of n^{-1/2} where n is the block length.
Scikit-learn: Machine Learning in Python Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
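A typical end-to-end use of the package looks as follows; the particular estimator and bundled dataset are just one of many possible choices:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fit a classifier on a bundled dataset and report held-out accuracy.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```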
Reusing analogous components Using formal specifications to represent software components facilitates the determination of reusability because they more precisely characterize the functionality of the software, and the well-defined syntax makes processing amenable to automation. This paper presents an approach, based on formal methods, to the search, retrieval, and modification of reusable software components. From a two-tiered hierarchy of reusable software components, the existing components that are analogous to the query specification are retrieved from the hierarchy. The specification for an analogous retrieved component is compared to the query specification to determine what changes need to be applied to the corresponding program component in order to make it satisfy the query specification.
Book Review: Verification of Sequential and Concurrent Programs by Krzysztof R. Apt and Ernst-Rüdiger Olderog (Springer-Verlag New York, 1997)
The specification logic nuZ This paper introduces a wide-spectrum specification logic nu Z. The minimal core logic is extended to a more expressive specification logic which includes a schema calculus similar (but not equivalent) to Z, new additional schema operators, and extensions to programming and program development logics.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.029946
0.031524
0.028571
0.015762
0.012019
0.004762
0.001375
0.000161
0.000004
0
0
0
0
0
The Fast Evaluation Strategy for Evolvable Hardware An evolutionary algorithm implemented in hardware is expected to operate much faster than the equivalent software implementation. However, this may not be true for slow fitness evaluation applications. This paper introduces a fast evolutionary algorithm (FEA) that does not evaluate all new individuals, thus operating faster for slow fitness evaluation applications. Results of a hardware implementation of this algorithm are presented that show the real time advantages of such systems for slow fitness evaluation applications. Results are presented for six optimisation functions and for image compression hardware.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
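For the tabu search entry above, a minimal sketch of the core TS loop on a 0/1 multiconstraint knapsack instance may help: a flip-move neighbourhood, a recency-based tabu list, and a simple aspiration criterion. This is an illustrative toy, not the specialized choice rules, advanced-level strategies, or Target Analysis described in the abstract; all names and parameter values are made up.

```python
import random

def tabu_knapsack(values, weights, capacities, iters=500, tenure=7):
    """Minimal tabu search for the 0/1 multiconstraint knapsack problem.
    weights[j][i] is the weight of item i in constraint j."""
    n = len(values)
    x = [0] * n

    def feasible(sol):
        return all(sum(w[i] * sol[i] for i in range(n)) <= c
                   for w, c in zip(weights, capacities))

    def value(sol):
        return sum(values[i] * sol[i] for i in range(n))

    best, best_val = x[:], 0
    tabu = {}   # item index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1                      # flip move
            if not feasible(y):
                continue
            v = value(y)
            # Aspiration: a tabu move is allowed only if it beats the best known solution.
            if tabu.get(i, -1) > it and v <= best_val:
                continue
            candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = max(candidates)          # best admissible move, improving or not
        x = y
        tabu[i] = it + tenure
        if v > best_val:
            best, best_val = x[:], v
    return best, best_val

if __name__ == "__main__":
    random.seed(0)
    vals = [random.randint(1, 30) for _ in range(20)]
    wts = [[random.randint(1, 10) for _ in range(20)] for _ in range(3)]
    caps = [40, 40, 40]
    print(tabu_knapsack(vals, wts, caps))
```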
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Formal Methods: Theory Becoming Practice This paper gives a tutorial introduction to the ideas behind system development using the B-Method. Properly handled, the crucial relationship between requirements and formal model leads to systems that are correct by construction. Some industrial successes are outlined.
Reverse engineering concurrent programs using formal modelling and analysis We describe a formally based approach to reverse engineering programs with concurrent constructs. This has been a topic which has not previously been addressed, yet is required for safety critical systems and many others. To achieve this we have used a transformation based approach which has been successful in coping with sequential systems. To accommodate concurrency we have extended the core kernel language (WSL) and introduce and then prove new transformations. We then describe preliminary results. The novel aspects of this work are the application of formal program transformations to the maintenance of concurrent software, and the use of modern type theory and type checking/proof tools to extend our existing method and the tool to new domains, namely concurrency and safety critical systems.
Visual syntax does matter: improving the cognitive effectiveness of the i* visual notation Goal-oriented modelling is one of the most important research developments in the requirements engineering (RE) field. This paper conducts a systematic analysis of the visual syntax of i*, one of the leading goal-oriented languages. Like most RE notations, i* is highly visual. Yet surprisingly, there has been little debate about or modification to its graphical conventions since it was proposed more than a decade ago. We evaluate the i* visual notation using a set of principles for designing cognitively effective visual notations (the Physics of Notations). The analysis reveals some serious flaws in the notation together with some practical recommendations for improvement. The results can be used to improve its effectiveness in practice, particularly for communicating with end users. A broader goal of the paper is to raise awareness about the importance of visual representation in RE research, which has historically received little attention.
An incremental development of the Mondex system in Event-B A development of the Mondex system was undertaken using Event-B and its associated proof tools. An incremental approach was used whereby the refinement between the abstract specification of the system and its detailed design was verified through a series of refinements. The consequence of this incremental approach was that we achieved a very high degree of automatic proof. The essential features of our development are outlined. We also present some modelling and proof guidelines that we found helped us gain a deep understanding of the system and achieve the high degree of automatic proof.
Formal Methods Applied to a Floating-Point Number System A formalization of the IEEE standard for binary floating-point arithmetic (ANSI/IEEE Std. 754-1985) is presented in the set-theoretic specification language Z. The formal specification is refined into four sequential components, which unpack the operands, perform the arithmetic, and pack and round the result. This refinement follows proven rules and so demonstrates a mathematically rigorous method of program development. In the course of the proofs, useful internal representations of floating-point numbers are specified. The procedures presented form the basis for the floating-point unit of the Inmos IMS T800 transputer.
Refinement calculus, part I: sequential nondeterministic programs A lattice theoretic framework for the calculus of program refinement is presented. Specifications and program statements are combined into a single (infinitary) language of commands which permits miraculous, angelic and demonic statements to be used in the description of program behavior. The weakest precondition calculus is extended to cover this larger class of statements and a game-theoretic interpretation is given for these constructs. The language is complete, in the sense that every monotonic predicate transformer can be expressed in it. The usual program constructs can be defined as derived notions in this language. The notion of inverse statements is defined and its use in formalizing the notion of data refinement is shown.
Software requirements as negotiated win conditions Current processes and support systems for software requirements determination and analysis often neglect the critical needs of important classes of stakeholders, and limit themselves to the concerns of the developers, users and customers. These stakeholders can include maintainers, interfacers, testers, product line managers, and sometimes members of the general public. This paper describes the results to date in researching and prototyping a next-generation process model (NGPM) and support system (NGPSS) which directly addresses these issues. The NGPM emphasizes collaborative processes, involving all of the significant constituents with a stake in the software product. Its conceptual basis is a set of “theory W” (win-win) extensions to the spiral model of software development
Where Do Operations Come From? A Multiparadigm Specification Technique We propose a technique to help people organize and write complex specifications, exploiting the best features of several different specification languages. Z is supplemented, primarily with automata and grammars, to provide a rigorous and systematic mapping from input stimuli to convenient operations and arguments for the Z specification. Consistency analysis of the resulting specification is based on the structural rules. The technique is illustrated by two examples, a graphical human-computer interface and a telecommunications system.
Knowledge Representation and Reasoning in the Design of Composite Systems The design process that spans the gap between the requirements acquisition process and the implementation process, in which the basic architecture of a system is defined and functions are allocated to software, hardware, and human agents, is studied. The authors call this process composite system design. The goal is an interactive model of composite system design incorporating deficiency-driven design, formal analysis, incremental design and rationalization, and design reuse. They discuss knowledge representations and reasoning techniques that support these goals for the product (composite system) that they are designing, and for the design process. To evaluate the model, the authors report on its use to reconstruct the design of two existing composite systems rationally.
Supporting the negotiation life cycle This article describes processes, products, and perspectives of the negotiation life cycle and applies this framework to show: (1) how different life cycle phases have different support requirements, and (2) how existing tools differ in their level of support for these various phases. We illustrate the use of the framework by showing how it can guide the selection of negotiation support tools for a specific negotiation context.
Tool support for requirements analysis Describes an approach to the provision of tool support for two particular aspects of requirements analysis: method support by active guidance, and specification interpretation and validation by animation. Method guidance is supported by a method model used to describe the sequence of method steps that should be followed. Animation provides an indication of the dynamic behaviour of the specified system by walking through a specification fragment to follow some scenario of interest. This approach to tool assistance has been tested by implementing a prototype set of tools for the CORE method and the Analyst workstation, and by application to a major case study. The current status of that work is described and evaluated
Normal forms in total correctness for while programs and action systems A classical while-program normal-form theorem is derived in demonic refinement algebra. In contrast to Kozen’s partial-correctness proof of the theorem in Kleene algebra with tests, the derivation in demonic refinement algebra provides a proof that the theorem holds in total correctness. A normal form for action systems is also discussed.
JAN - Java animation for program understanding JAN is a system for animated execution of Java programs. Its application area is program understanding rather than debugging. To this end, the animation can be customized, both by annotating the code with visualization directives and by interactively adapting the visual appearance to the user's personal taste. Object diagrams and sequence diagrams are supported. Scalability is achieved by recognizing object composition: object aggregates are displayed in a nested fashion and mechanisms for collapsing and exploding aggregates are provided. JAN has been applied to itself, producing an animation of its visualization back-end.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.070808
0.072889
0.066667
0.036444
0.01306
0.001588
0.000181
0.000081
0.000039
0.00001
0
0
0
0
FOL-Based Approach for Improving Legal-GRL Modeling Framework: A Case for Requirements Engineering of Legal Regulations of Social Media Requirements engineers need to have a comprehensive requirements modeling framework for modeling legal requirements, particularly for privacy-related regulations, which are required for IT systems. The nature of law demands a special approach for dealing with the complexity of regulations. In this paper, we integrate different approaches for modeling legal requirements into one unified framework. We use semantic parameterization technique and first-order logic (FOL) approach for extracting legal requirements from legal documents. We then use Goal-oriented Requirements Language (GRL) to illustrate and evaluate the models. The aim of this paper is to improve and extend the existing Legal-GRL framework using semantic parameterization process and FOL. We use social media as the example to illustrate our approach.
Identifying and classifying ambiguity for regulatory requirements Software engineers build software systems in increasingly regulated environments, and must therefore ensure that software requirements accurately represent obligations described in laws and regulations. Prior research has shown that graduate-level software engineering students are not able to reliably determine whether software requirements meet or exceed their legal obligations and that professional software engineers are unable to accurately classify cross-references in legal texts. However, no research has determined whether software engineers are able to identify and classify important ambiguities in laws and regulations. Ambiguities in legal texts can make the difference between requirements compliance and non-compliance. Herein, we develop an ambiguity taxonomy based on software engineering, legal, and linguistic understandings of ambiguity. We examine how 17 technologists and policy analysts in a graduate-level course use this taxonomy to identify ambiguity in a legal text. We also examine the types of ambiguities they found and whether they believe those ambiguities should prevent software engineers from implementing software that complies with the legal text. Our research suggests that ambiguity is prevalent in legal texts. In 50 minutes of examination, participants in our case study identified on average 33.47 ambiguities in 104 lines of legal text using our ambiguity taxonomy as a guideline. Our analysis suggests (a) that participants used the taxonomy as intended: as a guide and (b) that the taxonomy provides adequate coverage (97.5%) of the ambiguities found in the legal text.
Legal goal-oriented requirement language (legal GRL) for modeling regulations Every year, governments introduce new or revised regulations that are imposing new types of requirements on software development. Analyzing and modeling these legal requirements is time consuming, challenging and cumbersome for software and requirements engineers. Having regulation models can help understand regulations and converge toward better compliance levels for software and systems. This paper introduces a systematic method to extract legal requirements from regulations by mapping the latter to the Legal Profile for Goal-oriented Requirements Language (GRL) (Legal GRL). This profile provides a conceptual meta-model for the anatomy of regulations and maps its elements to standard GRL with specialized annotations and links, with analysis techniques that exploit this additional information. The paper also illustrates examples of Legal GRL models for The Privacy and Electronic Communications Regulations. Existing tool support (jUCMNav) is also extended to support Legal GRL modeling.
Analyzing Goal Semantics for Rights, Permissions, and Obligations Software requirements, rights, permissions, obligations, and operations of policy enforcing systems are often misaligned. Our goal is to develop tools and techniques that help requirements engineers and policy makers bring policies and system requirements into better alignment. Goals from requirements engineering are useful for distilling natural language policy statements into structured descriptions of these interactions; however, they are limited in that they are not easy to compare with one another despite sharing common semantic features. In this paper, we describe a process called semantic parameterization that we use to derive semantic models from goals mined from privacy policy documents. We present example semantic models that enable comparing policy statements and present a template method for generating natural language policy statements (and ultimately requirements) from unique semantic models. The semantic models are described by a context-free grammar called KTL that has been validated within the context of the most frequently expressed goals in over 100 Internet privacy policy documents. KTL is supported by a policy analysis tool that supports queries and policy statement generation.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
A semantics of multiple inheritance The aim of this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
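The lazy-evaluator entry above delays the evaluation of parameters and list structures until they are needed. A small Python analogue (not the paper's LISP algorithm or its correctness proof) uses memoized thunks to build an "infinite" list whose cells are only computed when forced:

```python
class Thunk:
    """Delay a computation until it is first forced; cache the result afterwards."""
    def __init__(self, fn):
        self._fn = fn
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._fn()
            self._fn = None          # drop the closure once evaluated
            self._done = True
        return self._value

def lazy_cons(head, tail_fn):
    """A lazy list cell: the tail is only built when someone asks for it."""
    return (head, Thunk(tail_fn))

def integers_from(n):
    return lazy_cons(n, lambda: integers_from(n + 1))

def take(cell, k):
    out = []
    while k > 0 and cell is not None:
        head, tail = cell
        out.append(head)
        cell = tail.force()          # evaluation happens here, not at construction
        k -= 1
    return out

if __name__ == "__main__":
    print(take(integers_from(0), 5))   # [0, 1, 2, 3, 4]
```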
Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.2
0.1
0.066667
0.003333
0
0
0
0
0
0
0
0
0
0
Reducing Requirement Perception Gaps through Coordination Mechanisms in Software Development Team.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An assessment of multilisp: lessons from experience Multilisp is a parallel programming language derived from the Scheme dialect of Lisp by addition of the future construct. It has been implemented on Concert, a 32-processor shared-memory multiprocessor. A statistics-gathering feature of Concert Multilisp produces parallelism profiles showing the number of processors busy with computing or overhead, as a function of time. Experience gained using parallelism profiles and other measurement tools on several application programs has revealed three basic ways in which future generates concurrency. These ways are illustrated on two example programs: the Lisp mapping function mapcar and the partitioning routine from Quicksort. Experience with Multilisp programming exposes issues relating to side effects, error and exception handling, low-level operations for explicit manipulation of futures and tasks, and speculative computing, which are also discussed. The basic outlines of Multilisp are now fairly clear and have stood the test of being used for several applications, but further language design work is especially needed in the areas of speculative computing and exception handling.
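Multilisp's future construct, as summarized above, creates concurrency by evaluating arguments in separate tasks. A rough Python analogue using the explicit futures of concurrent.futures may clarify the idea; note that Multilisp futures are implicit and transparently forced, while Python's must be resolved with result(), and CPython threads do not speed up CPU-bound work. Names here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for an expensive per-element computation.
    total = 0
    for _ in range(100_000):
        total += x * x
    return x * x

def parallel_mapcar(fn, xs, workers=4):
    """Rough analogue of mapping a function over a list with one future per element."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fn, x) for x in xs]   # create one future per argument
        return [f.result() for f in futures]          # touching a result forces the future

if __name__ == "__main__":
    print(parallel_mapcar(slow_square, range(8)))
```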
Further comments on the premature loop exit problem
A bidirectional data driven Lisp engine for the direct execution of Lisp in parallel
Multiprocessing extensions in Spur Lisp The authors describe their multiprocessing extensions to Common Lisp. They have added a few simple, expressive features on which one can build high-level constructs. These consist of a multithreading mechanism, primitives for communication and synchronization (mailboxes and signals), and a feature called futures. A few examples clarify how the primitives work and demonstrate their expressiveness. When Spur Lisp is ported to and optimized on the Spur workstation (a shared memory multiprocessor), programmers can use it to make symbolic programs parallel.
An architecture for mostly functional languages
The incremental garbage collection of processes This paper investigates some problems associated with an argument evaluation order that we call “future” order, which is different from both call-by-name and call-by-value. In call-by-future, each formal parameter of a function is bound to a separate process (called a “future”) dedicated to the evaluation of the corresponding argument. This mechanism allows the fully parallel evaluation of arguments to a function, and has been shown to augment the expressive power of a language. We discuss an approach to a problem that arises in this context: futures which were thought to be relevant when they were created become irrelevant through being ignored in the body of the expression where they were bound. The problem of irrelevant processes also appears in multiprocessing problem-solving systems which start several processors working on the same problem but with different methods, and return with the solution which finishes first. This parallel method strategy has the drawback that the processes which are investigating the losing methods must be identified, stopped, and re-assigned to more useful tasks.
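The entry above also mentions the parallel-method strategy: several processes attack the same problem and the losers become irrelevant once one finishes. A hedged Python sketch of that race pattern is below; unlike the incremental garbage collection the paper proposes, Python can only cancel futures that have not yet started, so running losers are merely abandoned. Function names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import time

def method_a(x):
    time.sleep(0.2)            # a slower method
    return ("a", x * x)

def method_b(x):
    time.sleep(0.05)           # a faster method
    return ("b", x * x)

def race(x):
    """Run several methods on the same problem and keep the first answer;
    remaining futures are cancelled if they have not started yet."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(m, x) for m in (method_a, method_b)]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()         # best effort; already-running tasks are not killed
        return next(iter(done)).result()

if __name__ == "__main__":
    print(race(6))
```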
The architecture of a Linda coprocessor We describe the architecture of a coprocessor that supports the communication primitives of the Linda parallel programming environment in hardware. The coprocessor is a critical element in the architecture of the Linda Machine, an MIMD parallel processing system that is designed top down from the specifications of Linda. Communication in Linda programs takes place through a logically shared associative memory mechanism called tuple space. The Linda Machine, however, has no physically shared memory. The microprogrammable coprocessor implements distributed protocols for executing tuple space operations over the Linda Machine communication network. The coprocessor has been designed and is in the process of fabrication. We discuss the projected performance of the coprocessor and compare it with software Linda implementations. This work is supported in part by National Science Foundation grants CCR-8657615 and ONR N00014-86-K-0310.
Stepwise Refinement of Distributed Systems, Models, Formalisms, Correctness, REX Workshop, Mook, The Netherlands, May 29 - June 2, 1989, Proceedings
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions allowing for parallel edge detection processing. The implementation is very simple and computationally efficient
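To illustrate the flavour of recursive (IIR) edge detection described above, and not the optimal filter derived in the paper, here is a toy Python sketch: a first-order forward-backward recursive smoothing pass followed by a central difference, whose cost per sample is independent of the effective filter width.

```python
def smooth_recursive(signal, alpha=0.5):
    """Forward-backward first-order recursive smoothing (an IIR low-pass)."""
    n = len(signal)
    fwd = [0.0] * n
    for i, v in enumerate(signal):
        fwd[i] = alpha * v + (1 - alpha) * (fwd[i - 1] if i else v)
    out = [0.0] * n
    for i in range(n - 1, -1, -1):
        nxt = out[i + 1] if i + 1 < n else fwd[i]
        out[i] = alpha * fwd[i] + (1 - alpha) * nxt
    return out

def edge_response(signal, alpha=0.5):
    """Central difference of the smoothed signal; peaks mark step edges."""
    s = smooth_recursive(signal, alpha)
    return [0.0] + [(s[i + 1] - s[i - 1]) / 2 for i in range(1, len(s) - 1)] + [0.0]

if __name__ == "__main__":
    step = [0.0] * 20 + [1.0] * 20
    r = edge_response(step)
    print(max(range(len(r)), key=lambda i: abs(r[i])))   # index near the step at 20
```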
Multiview—an exploration in information systems development
Provably Correct Systems. The goal of the Provably Correct Systems project (ProCoS) is to develop a mathematical basis for development of embedded, realtime, computer systems. This survey paper introduces the specification languages and verification techniques for four levels of development: Requirements definition and control design; Transformation to a systems architecture with program designs and their transformation to programs; Compilation of real-time programs to conventional processors, and Compilation of...
Navigating hierarchically clustered networks through fisheye and full-zoom methods Many information structures are represented as two-dimensional networks (connected graphs) of links and nodes. Because these networks tend to be large and quite complex, people often prefer to view part or all of the network at varying levels of detail. Hierarchical clustering provides a framework for viewing the network at different levels of detail by superimposing a hierarchy on it. Nodes are grouped into clusters, and clusters are themselves placed into other clusters. Users can then navigate these clusters until an appropriate level of detail is reached. This article describes an experiment comparing two methods for viewing hierarchically clustered networks. Traditional full-zoom techniques provide details of only the current level of the hierarchy. In contrast, fisheye views, generated by the “variable-zoom” algorithm described in this article, provide information about higher levels as well. Subjects using both viewing methods were given problem-solving tasks requiring them to navigate a network, in this case, a simulated telephone system, and to reroute links in it. Results suggest that the greater context provided by fisheye views significantly improved user performance. Users were quicker to complete their task and made fewer unnecessary navigational steps through the hierarchy. This validation of fisheye views is important for designers of interfaces to complicated monitoring systems, such as control rooms for supervisory control and data acquisition systems, where efficient human performance is often critical. However, control room operators remained concerned about the size and visibility tradeoffs between the fine detail provided by full-zoom techniques and the global context supplied by fisheye views. Specific interface features are required to reconcile the differences.
Notes on Nonrepetitive Graph Colouring. A vertex colouring of a graph is nonrepetitive on paths if there is no path v_1, v_2, ..., v_{2t} such that v_i and v_{t+i} receive the same colour for all i = 1, 2, ..., t. We determine the maximum density of a graph that admits a k-colouring that is nonrepetitive on paths. We prove that every graph has a subdivision that admits a 4-colouring that is nonrepetitive on paths. The best previous bound was 5. We also study colourings that are nonrepetitive on walks, and provide a conjecture that would imply that every graph with maximum degree Δ has an f(Δ)-colouring that is nonrepetitive on walks. We prove that every graph with treewidth k and maximum degree Δ has an O(kΔ)-colouring that is nonrepetitive on paths, and an O(kΔ^3)-colouring that is nonrepetitive on walks.
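As a small companion to the nonrepetitive-colouring entry above, the following Python check (an assumed helper, not from the paper) tests whether the colour sequence read along a given path is nonrepetitive, i.e., contains no block immediately followed by an identical block:

```python
def is_nonrepetitive_on_path(colours):
    """Check that no segment colours[i:i+t] is immediately repeated as colours[i+t:i+2t]."""
    n = len(colours)
    for i in range(n):
        for t in range(1, (n - i) // 2 + 1):
            if colours[i:i + t] == colours[i + t:i + 2 * t]:
                return False
    return True

if __name__ == "__main__":
    print(is_nonrepetitive_on_path([1, 2, 1, 3, 1, 2]))   # True: no adjacent repeated block
    print(is_nonrepetitive_on_path([1, 2, 1, 2]))          # False: the block "1 2" repeats
```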
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.086022
0.112043
0.112043
0.090121
0.032403
0.014521
0.000686
0.000029
0
0
0
0
0
0
Grids: A new program structuring mechanism based on layered graphs
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Handling Obstacles in Goal-Oriented Requirements Engineering Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system.
Monitoring software requirements using instrumented code Ideally, software is derived from requirements whose properties have been established as good. However, it is difficult to define and analyze requirements. Moreover derivation of software from requirements is error prone. Finally, the installation and use of compiled software can introduce errors. Thus, it can be difficult to provide assurances about the state of a software's execution. We present a framework to monitor the requirements of software as it executes. The framework is general, and allows for automated support. The current implementation uses a combination of assertion and model checking to inform the monitor. We focus on two issues: (1) the expression of "suspect requirements", and (2) the transparency of the software and its environment to the monitor. We illustrate these issues with the widely known problems of the Dining Philosophers and the CCITT X.509 authentication. Each are represented as Java programs which are then instrumented and monitored.
ScenIC: A Strategy for Inquiry-Driven Requirements Determination ScenIC is a requirements engineering method for evolving systems. Derived from the Inquiry Cycle model of requirements refinement, it uses goal refinement and scenario analysis as its primary methodological strategies. ScenIC rests on an analogy with human memory: semantic memory consists of generalizations about system properties; episodic memory consists of specific episodes and scenarios; and working memory consists of reminders about incomplete refinements. Method-specific reminders and resolution guidelines are activated by the state of episodic or semantic memory. The paper presents a summary of the ScenIC strategy and guidelines.
Personal and Contextual Requirements Engineering A framework for requirements analysis is proposed that accounts for individual and personal goals and the effect of time and context on personal requirements. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. Different implementation pathways have cost-benefit implications for stakeholders, so cost-benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
A Blackboard-Based Cooperative System for Schema Integration We describe a four-level blackboard architecture that supports schema integration and provide a detailed description of the communication among human and computational agents that this system allows.Today's corporate information system environments are heterogeneous, consisting of multiple and independently managed databases. Many applications that assist decision making call for access to data from multiple heterogeneous databases. To facilitate this, there needs to be an integrated representation of the underlying databases that allows users to query multiple databases simultaneously. The process of deriving this integrated representation is called schema integration.Schema integration is time consuming and complex, as it requires a thorough understanding of the underlying database semantics. Since no data model can capture the entire real world semantics of each database's objects, this process requires human agent assistance. Although certain aspects of schema integration can be automated, interaction with designers and users is still necessary. In this article, we describe how blackboard architectures can facilitate the communication among human and computational agents for schema integration.
Inferring Declarative Requirements Specifications from Operational Scenarios Scenarios are increasingly recognized as an effective means for eliciting, validating, and documenting software requirements. This paper concentrates on the use of scenarios for requirements elicitation and explores the process of inferring formal specifications of goals and requirements from scenario descriptions. Scenarios are considered here as typical examples of system usage; they are provided in terms of sequences of interaction steps between the intended software and its environment. Such scenarios are in general partial, procedural, and leave required properties about the intended system implicit. In the end such properties need to be stated in explicit, declarative terms for consistency/completeness analysis to be carried out.A formal method is proposed for supporting the process of inferring specifications of system goals and requirements inductively from interaction scenarios provided by stakeholders. The method is based on a learning algorithm that takes scenarios as examples/counterexamples and generates a set of goal specifications in temporal logic that covers all positive scenarios while excluding all negative ones.The output language in which goals and requirements are specified is the KAOS goal-based specification language. The paper also discusses how the scenario-based inference of goal specifications is integrated in the KAOS methodology for goal-based requirements engineering. In particular, the benefits of inferring declarative specifications of goals from operational scenarios are demonstrated by examples of formal analysis at the goal level, including conflict analysis, obstacle analysis, the inference of higher-level goals, and the derivation of alternative scenarios that better achieve the underlying goals.
Requirements Dynamics in Large Software Projects: A Perspective on New Directions in Software Engineering Process
Using the WinWin Spiral Model: A Case Study At the 1996 and 1997 International Conferences on Software Engineering, three of the six keynote addresses identified negotiation techniques as the most critical success factor in improving the outcome of software projects. The USC Center for Software Engineering has been developing a negotiation-based approach to software system requirements engineering, architecture, development, and management. This approach has three primary elements: Theory W, a management theory and approach, which says that making winners of the system's key stakeholders is a necessary and sufficient condition for project success. The WinWin spiral model, which extends the spiral software development model by adding Theory W activities to the front of each cycle. WinWin, a groupware tool that makes it easier for distributed stakeholders to negotiate mutually satisfactory (win-win) system specifications. This article describes an experimental validation of this approach, focusing on the application of the WinWin spiral model. The case study involved extending USC's Integrated Library System to access multimedia archives, including films, maps, and videos. The study showed that the WinWin spiral model is a good match for multimedia applications and is likely to be useful for other applications with similar characteristics--rapidly moving technology, many candidate approaches, little user or developer experience with similar systems, and the need for rapid completion.
Generalization/Specialization as a Basis for Software Specification
Capture, integration, and analysis of digital system requirements with conceptual graphs Initial requirements for new digital systems and products that are generally expressed in a variety of notations including diagrams and natural language can be automatically translated to a common knowledge representation for integration, for consistency and completeness analysis, and for further automatic synthesis. In this paper, block diagrams, flowcharts, timing diagrams, and English as used in specifying digital systems requirements are considered as examples of source notations for system requirements. The knowledge representation selected for this work is a form of semantic networks called conceptual graphs. For each source notation, a basis set of semantic primitives in terms of conceptual graphs is given, together with an algorithm for automatically generating conceptual structures from the notation. The automatic generation of conceptual structures from English presumes a restricted sublanguage of English and feedback to the author for verification of the interpretation. Mechanisms for integrating the separate conceptual structures generated from individual requirements expressions using schemata are discussed, and methods are illustrated for consistency and completeness analysis.
A taxonomy for real-world modelling concepts A major component in problem analysis is to model the real world itself. However, the modelling languages suggested so far suffer from several weaknesses, especially with respect to dynamics. First, dynamic modelling languages originally aimed at describing data processes rather than real-world processes. Moreover, they are either weak in expression, so that models become too vague to be meaningful, or they are cluttered with rigorous detail, which makes modelling unnecessarily complicated and inhibits the communication with end users. This paper establishes a simple and intuitive conceptual basis for the modelling of the real world, with an emphasis on dynamics. Object-orientation is not considered appropriate for this purpose, due to its focus on static object structure. Dataflow diagrams, on the other hand, emphasize dynamics, but unfortunately, some major conceptual deficiencies make DFDs, as well as their various formal extensions, unsuited for real-world modelling. This paper presents a taxonomy of concepts for real-world modelling which rely on some seemingly small, but essential modifications of the DFD language. Hence the well-known, communication-oriented diagrammatic representations of DFDs can be retained. It is indicated how the approach can support a smooth transition into later stages of object-oriented design and implementation.
Employing the Intelligent Interface for Scientific Discovery
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.009737
0.013437
0.012022
0.01039
0.00765
0.00765
0.006195
0.003826
0.001943
0.000377
0.000001
0
0
0
Orca: a language for distributed programming We present a simple model of shared data-objects, which extends the abstract data type model to support distributed programming. Our model essentially provides shared address space semantics, rather than message passing semantics, without requiring physical shared memory to be present in the target system. We also propose a new programming language, Orca, based on shared data-objects. A compiler and three different run time systems for Orca exist, which have been in use for over a year now.
MASA: a multithreaded processor architecture for parallel symbolic computing MASA is a “first cut” at a processor architecture intended as a building block for a multiprocessor that can execute parallel Lisp programs efficiently. MASA features a tagged architecture, multiple contexts, fast trap handling, and a synchronization bit in every memory word. MASA's principal novelty is its use of multiple contexts both to support multithreaded execution—interleaved execution from separate instruction streams—and to speed up procedure calls and trap handling in the same manner as register windows. A project is under way to evaluate MASA-like architectures for executing programs written in Multilisp.
Parallel Symbolic Computing
Matching language and hardware for parallel computation in the Linda Machine The Linda Machine is a parallel computer that has been designed to support the Linda parallel programming environment in hardware. Programs in Linda communicate through a logically shared associative memory called tuple space. The goal of the Linda Machine project is to implement Linda's high-level shared-memory abstraction efficiently on a nonshared-memory architecture. The authors describe the machine's special-purpose communication network and its associated protocols, the design of the Linda coprocessor, and the way its interaction with the network supports global access to tuple space. The Linda Machine is in the process of fabrication. The authors discuss the machine's projected performance and compare this to software versions of Linda.
Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions allowing for parallel edge detection processing. The implementation is very simple and computationally efficient.
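As a loose illustration of the filter shape discussed above, the sketch below computes a gradient magnitude with first-derivative-of-Gaussian filters using SciPy's FIR implementation. It is only a rough stand-in for the recursive IIR filters derived in the paper; the function name and the toy image are invented for the example.

import numpy as np
from scipy import ndimage

def gradient_magnitude_dog(image, sigma=2.0):
    # Derivative-of-Gaussian responses along x and y; the filters are separable,
    # so the two directional responses could be computed in parallel.
    gx = ndimage.gaussian_filter(image, sigma=sigma, order=[0, 1])
    gy = ndimage.gaussian_filter(image, sigma=sigma, order=[1, 0])
    return np.hypot(gx, gy)

# Toy example: a vertical step edge in a 64x64 image.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
mag = gradient_magnitude_dog(img)
print(mag.argmax(axis=1)[:5])   # the strongest response lies on the edge column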
Combining angels, demons and miracles in program specifications The complete lattice of monotonic predicate transformers is interpreted as a command language with a weakest precondition semantics. This command lattice contains Dijkstra's guarded commands as well as miracles. It also permits unbounded nondeterminism and angelic nondeterminism. The language is divided into sublanguages using criteria of demonic and angelic nondeterminism, termination and absence of miracles. We investigate dualities between the sublanguages and how they can be generated from simple primitive commands. The notions of total correctness and refinement are generalized to the command lattice.
Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions (supplying commands and arguments to programs) and semantic functions (visually presenting application semantics and supporting problem solving cognition). The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models nor metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers.
Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity-Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.04
0.022222
0.0025
0
0
0
0
0
0
0
0
0
A Theory of Generalised Substitutions We augment the usual wp semantics of substitutions with an explicit notion of frame, which allows us to develop a simple self-contained theory of generalised substitutions outside their usual context of the B Method. We formulate three fundamental healthiness conditions which semantically characterise all substitutions, and from which we are able to derive directly, without need of any explicit further appeal to syntax, a number of familiar properties of substitutions, as well as several new ones specifically concerning frames. In doing so we gain some useful insights about the nature of substitutions, which enables us to resolve some hitherto problematic issues concerning substitutions within the B Method.
A comparison of refinement orderings and their associated simulation rules In this paper we compare the refinement orderings, and their associated simulation rules, of state-based specification languages such as Z and Object-Z with the refinement orderings of event-based specification languages such as CSP. We prove with a simple counter-example that data refinement, established using the standard simulation rules for Z, does not imply failures refinement in CSP. This contradicts accepted results.
Interpreting the B-Method in the Refinement Calculus In this paper, we study the B-Method in the light of the theory of refinement calculus. It allows us to explain the proof obligations for a refinement component in terms of standard data refinement. A second result is an improvement of the architectural condition of [PR98], ensuring global correctness of a B software system using the sees primitive.
Layering Distributed Algorithms within the B-Method Superposition is a powerful program modularization and structuring method for developing parallel and distributed systems by adding new functionality to an algorithm while preserving the original computation. We present an important special case of the original superposition method, namely, that of considering each new functionality as a layer that is only allowed to read the variables of the previous layers. Thus, the superposition method with layers structures the presentation of the derivation. Each derivation step is, however, large and involves many complicated proof obligations. Tool support is important for getting confidence in these proofs and for administering the derivation steps. We have chosen the B-Method for this purpose. We propose how to extend the B-Method to make it more suitable for expressing the layers and assist in proving the corresponding superposition steps in a convenient way.
An Approach to the Design of Distributed Systems with B AMN In this paper, we describe an approach to the design of distributed systems with B AMN. The approach is based on the action-system formalism which provides a framework for developing state-based parallel reactive systems. More specifically, we use the so-called CSP approach to action systems in which interaction between subsystems is by synchronised message passing and there is no sharing of state. We show that the abstract machines of B may be regarded as action systems and show how reactive refinement and decomposition of action systems may be applied to abstract machines. The approach fits in closely with the stepwise refinement method of B.
Decentralization of process nets with centralized control The behavior of a net of interconnected, communicating processes is described in terms of the joint actions in which the processes can participate. A distinction is made between centralized and decentralized action systems. In the former, a central agent with complete information about the state of the system controls the execution of the actions; in the latter no such agent is needed. Properties of joint action systems are expressed in temporal logic. Centralized action systems allow for simple description of system behavior. Decentralized (two-process) action systems again can be mechanically compiled into a collection of CSP processes. A method for transforming centralized action systems into decentralized ones is described. The correctness of this method is proved, and its use is illustrated by deriving a process net that distributedly sorts successive lists of integers.
Appraising Fairness in Languages for Distributed Programming The relations among various languages and models for distributed computation and various possible definitions of fairness are considered. Natural semantic criteria are presented which an acceptable notion of fairness should satisfy. These are then used to demonstrate differences among the basic models, the added power of the fairness notion, and the sensitivity of the fairness notion to irrelevant semantic interleavings of independent operations. These results are used to show that from the considerable variety of commonly used possibilities, only strong process fairness is appropriate for CSP if these criteria are adopted. We also show that under these criteria, none of the commonly used notions of fairness are fully acceptable for a model with an n-way synchronization mechanism. The notion of fairness most often mentioned for Ada is shown to be fully acceptable. For a model with nonblocking send operations, some variants of common fairness definitions are appraised, and two are shown to satisfy the suggested criteria.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
The wire-tap channel We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc. Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.
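For reference, the limiting rate at which full equivocation (d = Hs) can be maintained, usually called the secrecy capacity, has the following standard form for the degraded wire-tap channel, where X is the channel input, Y the legitimate receiver's output and Z the wire-tapper's output (notation introduced here only for illustration):

C_s \;=\; \max_{p(x)} \bigl[\, I(X;Y) - I(X;Z) \,\bigr]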
Supporting scenario-based requirements engineering Scenarios have been advocated as a means of improving requirements engineering yet few methods or tools exist to support scenario-based RE. The paper reports a method and software assistant tool for scenario-based RE that integrates with use case approaches to object-oriented development. The method and operation of the tool are illustrated with a financial system case study. Scenarios are used to represent paths of possible behavior through a use case, and these are investigated to elaborate requirements. The method commences by acquisition and modeling of a use case. The use case is then compared with a library of abstract models that represent different application classes. Each model is associated with a set of generic requirements for its class, hence, by identifying the class(es) to which the use case belongs, generic requirements can be reused. Scenario paths are automatically generated from use cases, then exception types are applied to normal event sequences to suggest possible abnormal events resulting from human error. Generic requirements are also attached to exceptions to suggest possible ways of dealing with human error and other types of system failure. Scenarios are validated by rule-based frames which detect problematic event patterns. The tool suggests appropriate generic requirements to deal with the problems encountered. The paper concludes with a review of related work and a discussion of the prospects for scenario-based RE methods and tools.
Data Refinement of Remote Procedures Recently the action systems formalism for parallel and distributed systems has been extended with the procedure mechanism. This gives us a very general framework for describing different communication paradigms for action systems, e.g. remote procedure calls. Action systems come with a design methodology based on the refinement calculus. Data refinement is a powerful technique for refining action systems. In this paper we will develop a theory and proof rules for the refinement of action systems that communicate via remote procedures based on the data refinement approach. The proof rules we develop are compositional so that modular refinement of action systems is supported. As an example we will especially study the atomicity refinement of actions. This is an important refinement strategy, as it potentially increases the degree of parallelism in an action system.
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.071111
0.066667
0.033333
0.033333
0.004762
0.00037
0.000003
0
0
0
0
0
0
0
Efficient Mutation Killers in Action This paper presents the techniques and results of a novel model-based test case generation approach that automatically derives test cases from UML state machines. Mutation testing is applied on the modeling level to generate test cases. We present the test case generation approach, discuss the tool chain, and present the properties of the generated test cases. The main contribution of this paper is an empirical study of a car alarm system where different strategies for killing mutants are compared. We present detailed figures on the effectiveness of the test case generation technique. Although UML serves as an input language, all techniques are grounded on solid foundations: we give UML state transition diagrams a formal semantics by mapping them to Back's action systems.
Towards Symbolic Model-Based Mutation Testing: Pitfalls in Expressing Semantics as Constraints Model-based mutation testing uses altered models to generate test cases that are able to detect whether a certain fault has been implemented in the system under test. For this purpose, we need to check for conformance between the original and the mutated model. We have developed an approach for conformance checking of action systems using constraints. Action systems are well-suited to specify reactive systems and may involve non-determinism. Expressing their semantics as constraints for the purpose of conformance checking is not totally straightforward. This paper presents some pitfalls that hinder the way to a sound encoding of semantics into constraint satisfaction problems and gives solutions for each problem.
Towards Symbolic Model-Based Mutation Testing: Combining Reachability And Refinement Checking Model-based mutation testing uses altered test models to derive test cases that are able to reveal whether a modelled fault has been implemented. This requires conformance checking between the original and the mutated model. This paper presents an approach for symbolic conformance checking of action systems, which are well-suited to specify reactive systems. We also consider non-determinism in our models. Hence, we do not check for equivalence, but for refinement. We encode the transition relation as well as the conformance relation as a constraint satisfaction problem and use a constraint solver in our reachability and refinement checking algorithms. Explicit conformance checking techniques often face state space explosion. First experimental evaluations show that our approach has potential to outperform explicit conformance checkers.
Incremental Refinement Checking for Test Case Generation.
Automated Test Case Generation from Dynamic Models We have recently shown how use cases can be systematically transformed into UML state charts considering all relevant information from a use case specification, including pre- and postconditions. The resulting state charts can have transitions with conditions and actions, as well as nested states (sub and stub states). The current paper outlines how test suites with a given coverage level can be automatically generated from these state charts. We do so by mapping state chart elements to the STRIPS planning language. The application of the state of the art planning tool graphplan yields the different test cases as solutions to a planning problem. The test cases (sequences of messages plus test data) can be used for automated or manual software testing on system level.
Killing strategies for model-based mutation testing. This article presents the techniques and results of a novel model-based test case generation approach that automatically derives test cases from UML state machines. The main contribution of this article is the fully automated fault-based test case generation technique together with two empirical case studies derived from industrial use cases. Also, an in-depth evaluation of different fault-based test case generation strategies on each of the case studies is given and a comparison with plain random testing is conducted. The test case generation methodology supports a wide range of UML constructs and is grounded on the formal semantics of Back's action systems and the well-known input-output conformance relation. Mutation operators are employed on the level of the specification to insert faults and generate test cases that will reveal the faults inserted. The effectiveness of this approach is shown and it is discussed how to gain a more expressive test suite by combining cheap but undirected random test case generation with the more expensive but directed mutation-based technique. Finally, an extensive and critical discussion of the lessons learnt is given as well as a future outlook on the general usefulness and practicability of mutation-based test case generation. Copyright © 2014 John Wiley & Sons, Ltd.
Trace Refinement of Action Systems Action systems provide a general description of reactive systems, capable of modeling terminating, aborting and infinitely repeating systems. Arbitrary sequential program statements can be used to describe the behavior of atomic actions. Action systems are used to extend program refinement methods for sequential programs to parallel and reactive system refinement. We give here a behavioral semantics of action systems in terms of execution traces, and define refinement of action systems in terms of this semantics. We give a simulation based proof rule for action system refinement in a reactive context, and illustrate the use of this rule with an example. The proof rule is complete under certain restrictions. An action system describes the behavior of a parallel system in terms of the atomic actions that can take place during the execution of the system. Action systems provide a general description of reactive systems, capable of modeling systems that may or may not terminate and where atomic actions need not terminate themselves. Arbitrary sequential program statements can be used to describe an atomic action. The action system approach to parallel and distributed systems was introduced by Back and Kurki-Suonio (5, 6), as a paradigm for describing parallel systems in a temporal logic framework. The same basic approach has later been used in other frameworks for distributed computing, notably UNITY (11) and TLA (14). The refinement calculus was originally described by Back (2) to provide a formal framework for stepwise refinement of sequential programs. It extends Dijkstra's weakest precondition semantics (12) for total correctness of programs with a relation of refinement between program statements. This relation is defined in terms of the weakest preconditions of statements, and expresses the requirement that a refinement must preserve total correctness of the statement being refined. A lattice theoretic basis for the refinement calculus is described in (9). A good overview of how to apply the refinement calculus in practical program derivations is given by Morgan (15). By modeling parallel systems as action systems, which can be seen as special kinds of sequential systems, the refinement calculus framework can be extended to total correctness refinement of parallel systems (4, 7, 8). Reactive system refinement can be handled by existing techniques for data refinement of sequential programs within the refinement calculus. The main extension needed is that silent or stuttering actions have to be considered explicitly. Data refinement
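For readers unfamiliar with the refinement relation referred to above, it is standardly defined in terms of weakest preconditions (generic refinement-calculus notation, not quoted from the paper): a statement S is refined by S' exactly when S' establishes every postcondition that S does,

S \sqsubseteq S' \;\equiv\; \forall q.\; \mathrm{wp}(S, q) \Rightarrow \mathrm{wp}(S', q).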
Software Requirements Analysis for Real-Time Process-Control Systems A set of criteria is defined to help find errors in software requirements specifications. Only analysis criteria that examine the behavioral description of the computer are considered. The behavior of the software is described in terms of observable phenomena external to the software. Particular attention is focused on the properties of robustness and lack of ambiguity. The criteria are defined using an abstract state-machine model for generality. Using these criteria, analysis procedures can be defined for particular state-machine modeling languages to provide semantic analysis of real-time process-control software requirements.
A relational notation for state transition systems A relational notation for specifying state transition systems is presented. Several refinement relations between specifications are defined. To illustrate the concepts and methods, three specifications of the alternating-bit protocol are given. The theory is applied to explain auxiliary variables. Other applications of the theory to protocol verification, composition, and conversion are discussed. The approach is compared with previously published approaches.
Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques when 'programming in the small' and do not stand on the firm experience of existing developments methods for large systems. One such technique called block design has been used within the telecommunication industry and relies on a similar paradigm as object-oriented programming. The two techniques together with a third technique, conceptual modeling used for requirement modeling of information systems, have been unified into a method for the development of large systems.
Financial Privacy Policies and the Need for Standardization By analyzing 40 online privacy policy documents from nine financial institutions, the authors examine the clarity and readability of these important privacy notices. Using goal-driven requirements engineering techniques and readability analysis, the findings show that compliance with the existing legislation and standards is, at best, questionable.
Analyzing Regulatory Rules for Privacy and Security Requirements Information practices that use personal, financial and health-related information are governed by U.S. laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must be properly aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These "rules" are often precursors to software requirements that must undergo considerable refinement and analysis before they are implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology to extract access rights and obligations directly from regulation texts. The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross-references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the U.S. Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.
Processing Negation in NL Interfaces to Knowledge Bases This paper deals with Natural Language (NL) question-answering to knowledge bases (KB). It considers the usual conceptual graphs (CG) approach for NL semantic interpretation by joins of canonical graphs and compares it to the computational linguistics approach for NL question-answering based on logical forms. After these theoretical considerations, the paper presents a system for querying a KB of CG in the domain of finances. It uses controlled English and processes large classes of negative questions. Internally the negation is interpreted as a replacement of the negated type by its siblings from the type hierarchy. The answer is found by KB projection, generalized and presented in NL in a rather summarized form, without a detailed enumeration of types. Thus the paper presents an interface for NL understanding and original techniques for application of CG operations (projection and generalization) as means for obtaining a more "natural" answer to the user's negative questions.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.052478
0.03259
0.02417
0.022232
0.00279
0.000872
0.00006
0.000008
0
0
0
0
0
0
A Probabilistic Scheme for Secure Estimation of Sensor Networks in the Presence of Packet Losses and Eavesdroppers This paper is concerned with a security mechanism applied to information filtering in sensor networks with eavesdroppers and random packet losses. The data transmitted in the communication channels between the sensors and the estimator can be heard by the eavesdroppers with a certain probability and is randomly dropped. A stochastic security mechanism that randomly keeps information on the sensors at appropriate rates is applied. An optimal probability based on solving an LMI is provided to guarantee that the estimation error of the legitimate user is stochastically bounded and the eavesdropper gets an enormous error. An algorithm is also given to search for the optimal probability. The effectiveness of this scheme for target tracking in wireless sensor networks is verified by a simulation example.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A proposed secure multiple watermarking technique based on DWT, DCT and SVD for application in medicine. In this paper, an algorithm for multiple watermarking based on discrete wavelet transform (DWT), discrete cosine transform (DCT) and singular value decomposition (SVD) has been proposed for healthcare applications. For identity authentication purposes, the proposed method uses three watermarks in the form of a medical Lump image watermark, the doctor signature/identification code and diagnostic information of the patient as the text watermarks. In order to improve the robustness performance of the image watermark, a Back Propagation Neural Network (BPNN) is applied to the extracted image watermark to reduce the noise effects on the watermarked image. The security of the image watermark is also enhanced by using an Arnold transform before embedding into the cover. Further, the symptom and signature text watermarks are also encoded by a lossless arithmetic compression technique and a Hamming error correction code respectively. The compressed and encoded text watermark is then embedded into the cover image. Experimental results are obtained by varying the gain factor, the sizes of the text watermarks and the cover image modalities. The results are provided to illustrate that the proposed method is able to withstand a variety of signal processing attacks and has been found to give excellent performance in terms of robustness, imperceptibility, capacity and security simultaneously. The robustness performance of the method is also compared with other reported techniques. Finally, the visual quality of the watermarked image is also evaluated by a subjective method. This shows that the visual quality of the watermarked images is acceptable for diagnosis at different gain factors. Therefore, the proposed method may find potential application in the prevention of patient identity theft in healthcare applications.
Digital Watermarking for Image Authentication Based on Combined DCT, DWT and SVD Transformation. This paper presents a hybrid digital image watermarking scheme based on Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT) and Singular Value Decomposition (SVD) in a zigzag order. From the DWT we choose the high band to embed the watermark, which makes it possible to add more information and gives more invisibility and robustness against some attacks, such as geometric attacks. A zigzag method is applied to map DCT coefficients into four quadrants that represent low, mid and high bands. Finally, SVD is applied to each quadrant.
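For concreteness, the following is a minimal sketch of a DWT-DCT-SVD embedding step of the kind described above, assuming a grayscale cover image, a one-level Haar DWT, and an additive rule on the singular values with gain alpha; the zigzag quadrant mapping and all parameter choices here are illustrative assumptions rather than the exact pipeline of either paper.

```python
# Illustrative DWT -> DCT -> SVD watermark embedding (a sketch, not the papers' exact method).
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_watermark(cover, mark, alpha=0.05):
    """Embed `mark` into the high-frequency DWT band of `cover` via DCT and SVD."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')    # one-level DWT
    C = dctn(HH, norm='ortho')                                   # DCT of the high band
    U, S, Vt = np.linalg.svd(C, full_matrices=False)             # SVD of the DCT coefficients
    _, Sm, _ = np.linalg.svd(mark.astype(float), full_matrices=False)
    S_marked = S + alpha * Sm[:S.size]                           # additive rule on singular values
    HH_marked = idctn(U @ np.diag(S_marked) @ Vt, norm='ortho')
    return pywt.idwt2((LL, (LH, HL, HH_marked)), 'haar')

# Toy usage: a 64x64 cover and a 32x32 watermark (the HH band of a 64x64 image is 32x32).
cover = np.random.rand(64, 64) * 255
mark = (np.random.rand(32, 32) > 0.5).astype(float)
watermarked = embed_watermark(cover, mark)
```

Detection would rerun the same transform chain on a suspect image and compare its singular values with the stored originals; the gain alpha trades robustness against imperceptibility.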
Robust and Secure Multiple Watermarking for Medical Images. This paper presents a robust and secure region of interest and non-region of interest based watermarking method for medical images. The proposed method applies the combination of discrete wavelet transform and discrete cosine transform on the cover medical image for the embedding of the image and electronic patient record (EPR) watermarks simultaneously. The embedding of multiple watermarks at the same time provides an extra level of security and is important for patient identity verification. Further, the security of the image and EPR watermarks is enhanced by using the message-digest (MD5) hash algorithm and Rivest-Shamir-Adleman encryption respectively before embedding into the medical cover image. In addition, a Hamming error correction code is applied to the encrypted EPR watermark to enhance the robustness and reduce the possibility of bit errors, which may result in a wrong diagnosis in medical environments. The robustness of the method is also extensively examined for known attacks such as salt & pepper, Gaussian, speckle, JPEG compression, filtering, and histogram equalization. The method is found to be robust for the hidden watermark at an acceptable quality of the watermarked image. Therefore, the hybrid method is suitable for the avoidance of patient identity theft/alteration/modification and for secure medical document dissemination over the open channel for medical applications.
Image encryption using the two-dimensional logistic chaotic map Chaotic maps and chaotic systems have been proved to be useful and effective for cryptography. In our study, the two-dimensional logistic map with complicated basin structures and attractors is first used for image encryption. The proposed method adopts the classic framework of the permutation-substitution network in cryptography and thus ensures both confusion and diffusion properties for a secure cipher. The proposed method is able to encrypt an intelligible image into a random-like one from both the statistical point of view and the human visual system point of view. Extensive simulation results using test images from the USC-SIPI image database demonstrate the effectiveness and robustness of the proposed method. Security analysis results using both conventional and the most recent tests show that the encryption quality of the proposed method reaches or excels the current state-of-the-art methods. Similar encryption ideas can be applied to digital data in other formats (e.g., digital audio and video). We also publish the cipher MATLAB open-source-code under the web page https://sites.google.com/site/tuftsyuewu/source-code. (c) 2012 SPIE and IS&T. [DOI: 10.1117/1.JEI.21.1.013014]
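As a rough illustration of the permutation-substitution framework mentioned above, here is a toy cipher in which two 1-D logistic maps stand in for the paper's 2-D map; the map parameter r, the key values x0 and y0, and the burn-in length are assumptions made only for this sketch.

```python
# Toy permutation-substitution image cipher driven by logistic chaos.  Two 1-D
# logistic maps stand in for the paper's 2-D map; r, x0, y0 and the burn-in
# are illustrative assumptions, not the paper's settings.
import numpy as np

def logistic_stream(x0, n, r=3.99, burn_in=500):
    x, out = x0, np.empty(n)
    for i in range(burn_in + n):
        x = r * x * (1 - x)                        # logistic map iteration
        if i >= burn_in:
            out[i - burn_in] = x
    return out

def encrypt(img, x0=0.31, y0=0.47):
    flat = img.flatten()
    perm = np.argsort(logistic_stream(x0, flat.size))                # confusion: chaotic permutation
    mask = (logistic_stream(y0, flat.size) * 255).astype(np.uint8)   # diffusion: chaotic byte mask
    return (flat[perm] ^ mask).reshape(img.shape)

def decrypt(cipher, x0=0.31, y0=0.47):
    perm = np.argsort(logistic_stream(x0, cipher.size))
    mask = (logistic_stream(y0, cipher.size) * 255).astype(np.uint8)
    shuffled = cipher.flatten() ^ mask
    out = np.empty_like(shuffled)
    out[perm] = shuffled                            # invert the permutation
    return out.reshape(cipher.shape)

img = (np.random.rand(16, 16) * 255).astype(np.uint8)
assert np.array_equal(decrypt(encrypt(img)), img)
```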
Crypto-Watermarking of Transmitted Medical Images. Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secured scheme to guarantee confidentiality and verify authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images’ region of non-interest using SVD in the DWT domain. Integrity is provided in two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding patient’s data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
Multiple watermarking technique for securing online social network contents using Back Propagation Neural Network. The initial contribution in this paper begins with proposing a robust and secure DWT, DCT and SVD based multiple watermarking technique for protecting digital contents over insecure social networks. The proposed technique initially decomposes the host image with a third-level DWT, where the vertical frequency band (LH2) at the second level and the low frequency band (LL3) at the third level are selected for embedding the image and text watermarks respectively. Further, to address the issue of ownership identity authentication, multiple watermarks are embedded instead of a single watermark into the same multimedia object simultaneously, which offers an extra level of security and reduces storage and bandwidth requirements in important application areas such as E-health, secure multimedia content on online social networks, secured E-Voting systems, digital cinema, education and insurance companies, and driver's license/passport. Moreover, the robustness of the image watermark is also enhanced by using a Back Propagation Neural Network (BPNN), which is applied to the extracted watermark to minimize the distortion effects on the watermarked image. In addition, the method addresses the issue of channel noise distortions in the identity information. This has been achieved using error correcting codes (ECCs) for encoding the text watermark before embedding into the host image. The effects of Hamming and BCH codes on the robustness of the personal identity information in the form of the text watermark and on the cover image quality have been investigated. Further, to enhance the security of the host and the watermarks, selective encryption is applied to the watermarked image, where only the important multimedia data is encrypted. The proposed method has been extensively tested and analyzed against known attacks. Based on experimental results, it is established that the proposed technique achieves superior performance in respect of robustness, security and capacity with acceptable visual quality of the watermarked image as compared to reported techniques. Finally, we have evaluated the image quality of the watermarked image by a subjective method. Therefore, the proposed method may find potential application in the prevention of personal identity theft and unauthorized multimedia content sharing on online social networks/open channels.
"Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents. The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems.
Ant Algorithms for Discrete Optimization This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
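To make the notion of "linearity in a dioid" concrete, the toy example below iterates the earliest-firing-time recursion of a timed event graph in the (max, +) algebra, where addition is max and multiplication is +; the 2x2 matrix of holding times is invented for illustration.

```python
# Toy (max,+) dioid arithmetic: "addition" is max (neutral element -inf) and
# "multiplication" is +, so x(k+1) = A (x) x(k) describes the earliest firing
# times of a timed event graph as a linear recursion in this algebra.
import numpy as np

NEG_INF = float('-inf')   # the dioid zero

def maxplus_matvec(A, x):
    n, m = A.shape
    y = np.full(n, NEG_INF)
    for i in range(n):
        y[i] = max(A[i, k] + x[k] for k in range(m))
    return y

# Hypothetical 2-transition timed event graph: A[i, j] is the holding time of
# the place from transition j to transition i (use NEG_INF where no place exists).
A = np.array([[3.0, 7.0],
              [2.0, 4.0]])
x = np.array([0.0, 1.0])              # initial firing epochs
for k in range(3):
    x = maxplus_matvec(A, x)          # x(k+1)_i = max_j (A_ij + x(k)_j)
    print(f"x({k + 1}) =", x)
```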
Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
A marriage of rely/guarantee and separation logic In the quest for tractable methods for reasoning about concurrent algorithms both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely/guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes.
Levelled Entity Relationship Model The Entity-Relationship formalism, introduced in the mid-seventies, is an extensively used tool for database design. The database community is now involved in building the next generation of database systems. However, there is no effective formalism similar to ER for modeling the complex data in these systems. We propose the Leveled Entity Relationship (LER) formalism as a step towards fulfilling such a need. An essential characteristic of these next-generation systems is that a data element is ...
Unifying Theories of Parallel Programming We are developing a shared-variable refinement calculus in the style of the sequential calculi of Back, Morgan, and Morris. As part of this work, we're studying different theories of shared-variable programming. Using the concepts and notations of Hoare & He's unifying theories of programming (UTP), we give a formal semantics to a programming language that contains sequential composition, conditional statements, while loops, nested parallel composition, and shared variables. We first give a UTP semantics to labelled action systems, and then use this to give the semantics of our programs. Labelled action systems have a unique normal form that allows a simple formalisation and validation of different logics for reasoning about shared-variable programs. In this paper, we demonstrate how this is done for Lamport's Concurrent Hoare Logic.
Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrences by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements to maintain image high quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image.
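For orientation, the sketch below implements the conventional single-peak histogram-shifting scheme on left-neighbour prediction errors that methods such as AGM refine; it is not the AGM algorithm itself, and pixel over/underflow handling is omitted.

```python
# Baseline histogram-shifting RDH on left-neighbour prediction errors.  This is
# the conventional single-peak scheme that AGM refines, not AGM itself.
import numpy as np

def embed(img, bits):
    img = img.astype(int)
    err = img[:, 1:] - img[:, :-1]
    peak = np.bincount(err.ravel() + 510).argmax() - 510   # most frequent prediction error
    out, it = img.copy(), iter(bits)
    for r in range(img.shape[0]):
        for c in range(1, img.shape[1]):
            e = img[r, c] - img[r, c - 1]
            if e > peak:
                out[r, c] += 1                  # shift right to vacate the bin peak + 1
            elif e == peak:
                out[r, c] += next(it, 0)        # hide one bit in the peak bin
    return out, peak

def extract(marked, peak):
    rec, bits = marked.copy(), []
    for r in range(marked.shape[0]):
        for c in range(1, marked.shape[1]):
            e = marked[r, c] - rec[r, c - 1]    # left neighbour is already restored
            if e in (peak, peak + 1):
                bits.append(e - peak)
                rec[r, c] = rec[r, c - 1] + peak
            elif e > peak + 1:
                rec[r, c] -= 1
    return rec, bits

img = (np.arange(64).reshape(8, 8) % 7) + 100   # smooth toy image with a sharply peaked error histogram
marked, peak = embed(img, [1, 0, 1, 1])
rec, bits = extract(marked, peak)
assert np.array_equal(rec, img) and bits[:4] == [1, 0, 1, 1]
```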
1.106708
0.106708
0.106708
0.1025
0.1025
0.05125
0.000833
0.000102
0
0
0
0
0
0
Almost sure synchronization criteria of neutral-type neural networks with Lévy noise and sampled-data loss via event-triggered control. This paper addresses the synchronization problem for neutral-type neural networks with Lévy noise and sampled-data loss. An event-triggered control scheme is employed to overcome occasional sampled-data loss and solve the synchronization problem; the scheme is a sampling controller with a selection mechanism. Under the scheme, the sampled data is not transmitted to the plant unless a predetermined threshold condition is violated. The Lyapunov method and the linear matrix inequality technique are employed to analyze the almost sure stability of the synchronization error system. Finally, a numerical example shows the effectiveness of the derived results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Image watermarking based on invariant regions of scale-space representation This paper proposes a novel content-based image watermarking method based on invariant regions of an image. The invariant regions are self-adaptive image patches that deform with geometric transformations. Three different invariant-region detection methods based on the scale-space representation of an image were considered for watermarking. At each invariant region, the watermark is embedded after geometric normalization according to the shape of the region. By binding watermarking with invariant regions, resilience against geometric transformations can be readily obtained. Experimental results show that the proposed method is robust against various image processing steps, including geometric transformations, cropping, filtering, and JPEG compression.
Still-image watermarking robust to local geometric distortions. Geometrical distortions are the Achilles heel for many watermarking schemes. Most countermeasures proposed in the literature only address the problem of global affine transforms (e.g., rotation, scaling, and translation). In this paper, we propose an original blind watermarking algorithm robust to local geometrical distortions such as the deformations induced by Stirmark. Our method consists in adding a predefined additional information to the useful message bits at the insertion step. These additional bits are labeled as resynchronization bits or reference bits and they are modulated in the same way as the information bits. During the extraction step, the reference bits are used as anchor points to estimate and compensate for small local and global geometrical distortions. The deformations are approximated using a modified basic optical flow algorithm.
Improved seam carving for video retargeting Video, like images, should support content aware resizing. We present video retargeting using an improved seam carving operator. Instead of removing 1D seams from 2D images we remove 2D seam manifolds from 3D space-time volumes. To achieve this we replace the dynamic programming method of seam carving with graph cuts that are suitable for 3D volumes. In the new formulation, a seam is given by a minimal cut in the graph and we show how to construct a graph such that the resulting cut is a valid seam. That is, the cut is monotonic and connected. In addition, we present a novel energy criterion that improves the visual quality of the retargeted images and videos. The original seam carving operator is focused on removing seams with the least amount of energy, ignoring energy that is introduced into the images and video by applying the operator. To counter this, the new criterion is looking forward in time - removing seams that introduce the least amount of energy into the retargeted result. We show how to encode the improved criterion into graph cuts (for images and video) as well as dynamic programming (for images). We apply our technique to images and videos and present results of various applications.
Data hiding for fighting piracy The problem of digital content piracy is becoming more and more critical, and major content producers are risking seeing their business being drastically reduced because of the ease by which digital contents can be copied and distributed. This is the reason why digital rights management (DRM) is currently garnering much attention from industry and research. Among the various technologies that can contribute to set up a reliable DRM system, data hiding (watermarking) has found an important place due to its potentiality of persistent attaching some additional information to the content itself. In this article, we analyzed the possible use of data hiding technology in DRM systems. The article also gives a brief survey of the main characteristics of the most common data hiding methods such as proof of ownership, watermark, copyright protection, infringement tracking, copy control, and item identification. The article also investigates the different approaches by highlighting critical points of each approach in particular from the point of view of hostile attacks.
Stochastic image warping for improved watermark desynchronization The use of digital watermarking in real applications is impeded by the weakness of current available algorithms against signal processing manipulations leading to the desynchronization of the watermark embedder and detector. For this reason, the problem of watermarking under geometric attacks has received considerable attention throughout recent years. Despite their importance, only few classes of geometric attacks are considered in the literature, most of which consist of global geometric attacks. The random bending attack contained in the Stirmark benchmark software is the most popular example of a local geometric transformation. In this paper, we introduce two new classes of local desynchronization attacks (DAs). The effectiveness of the new classes of DAs is evaluated from different perspectives including perceptual intrusiveness and desynchronization efficacy. This can be seen as an initial effort towards the characterization of the whole class of perceptually admissible DAs, a necessary step for the theoretical analysis of the ultimate performance reachable in the presence of watermark desynchronization and for the development of a new class of watermarking algorithms that can efficiently cope with them.
Circularly orthogonal moments for geometrically robust image watermarking Circularly orthogonal moments, such as Zernike moments (ZMs) and pseudo-Zernike moments (PZMs), have attracted attention due to their invariance properties. However, we find that for digital images, the invariance properties of some ZMs/PZMs are not perfectly valid. This is significant for applications of ZMs/PZMs. By distinguishing between the 'good' and 'bad' ZMs/PZMs in terms of their invariance properties, we design image watermarks with 'good' ZMs/PZMs to achieve watermark's robustness to geometric distortions, which has been considered a crucial and difficult issue in the research of digital watermarking. Simulation results show that the embedded information can be decoded at low error rates, robust against image rotation, scaling, flipping, as well as a variety of other common manipulations such as lossy compression, additive noise and lowpass filtering.
Evaluation of Interest Point Detectors Many different low-level feature detectors exist and it is widely agreed that the evaluation of detectors is important. In this paper we introduce two evaluation criteria for interest points: repeatability rate and information content. Repeatability rate evaluates the geometric stability under different transformations. Information content measures the distinctiveness of features. Different interest point detectors are compared using these two criteria. We determine which detector gives the best results and show that it satisfies the criteria well.
Reversible data hiding using additive prediction-error expansion Reversible data hiding is a technique that embeds secret data into cover media through an invertible process. In this paper, we propose a reversible data hiding scheme that can embed a large amount of secret data into image with imperceptible modifications. The prediction-error, difference between pixel value and its predicted value, is used to embed a bit '1' or '0' by expanding it additively or leaving it unchanged. Low distortion is guaranteed by limiting pixel change to 1 and averting possible pixel over/underflow; high pure capacity is achieved by adopting effective predictors to greatly exploit pixel correlation and avoiding large overhead like location map. Experimental results demonstrate that the proposed scheme provides competitive performances compared with other state-of-the-art schemes.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay system, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Handwritten alphanumeric character recognition by the neocognitron A pattern recognition system which works with the mechanism of the neocognitron, a neural network model for deformation-invariant visual pattern recognition, is discussed. The neocognitron was developed by Fukushima (1980). The system has been trained to recognize 35 handwritten alphanumeric characters. The ability to recognize deformed characters correctly depends strongly on the choice of the training pattern set. Some techniques for selecting training patterns useful for deformation-invariant recognition of a large number of characters are suggested.
Where Do Operations Come From? A Multiparadigm Specification Technique We propose a technique to help people organize and write complex specifications, exploiting the best features of several different specification languages. Z is supplemented, primarily with automata and grammars, to provide a rigorous and systematic mapping from input stimuli to convenient operations and arguments for the Z specification. Consistency analysis of the resulting specification is based on the structural rules. The technique is illustrated by two examples, a graphical human-computer interface and a telecommunications system.
Explanation-Based Scenario Generation for Reactive System Models Reactive systems control many useful and complex real-world devices. Tool-supported specification modeling helps software engineers design such systems correctly. One such tool, a scenario generator, constructs an input event sequence for the spec model that reaches a state satisfying given criteria. It can uncover counterexamples to desired safety properties, explain feature interactions in concrete terms to requirements analysts, and even provide online help to end users learning how to use a system. However, while exhaustive search algorithms such as model checkers work in limited cases, the problem is highly intractable for the functionally rich models that correspond naturally to complex systems engineers wish to design. This paper describes a novel heuristic approach to the problem that is applicable to a large class of infinite state reactive systems. The key idea is to piece together scenarios that achieve subgoals into a single scenario achieving the conjunction of the subgoals. The scenarios are mined from a library captured independently during requirements acquisition. Explanation-based generalization then abstracts them so they may be coinstantiated and interleaved. The approach is implemented, and I present the results of applying the tool to 63 scenario generation problems arising from a case study of telephony feature validation.
Enhancing Human Face Detection by Resampling Examples Through Manifolds As a large-scale database of hundreds of thousands of face images collected from the Internet and digital cameras becomes available, how to utilize it to train a well-performed face detector is a quite challenging problem. In this paper, we propose a method to resample a representative training set from a collected large-scale database to train a robust human face detector. First, in a high-dimensional space, we estimate geodesic distances between pairs of face samples/examples inside the collected face set by isometric feature mapping (Isomap) and then subsample the face set. After that, we embed the face set to a low-dimensional manifold space and obtain the low-dimensional embedding. Subsequently, in the embedding, we interweave the face set based on the weights computed by locally linear embedding (LLE). Furthermore, we resample nonfaces by Isomap and LLE likewise. Using the resulting face and nonface samples, we train an AdaBoost-based face detector and run it on a large database to collect false alarms. We then use the false detections to train a one-class support vector machine (SVM). Combining the AdaBoost and one-class SVM-based face detector, we obtain a stronger detector. The experimental results on the MIT + CMU frontal face test set demonstrated that the proposed method significantly outperforms the other state-of-the-art methods.
Voice in Virtual Worlds: The Design, Use, and Influence of Voice Chat in Online Play Communication is a critical aspect of any collaborative system. In online multiplayer games and virtual worlds it is especially complex. Users are present over long periods, require both synchronous and asynchronous communication, and may prefer to be pseudonymous or engage in identity-play while managing virtual and physical use contexts. Initially the only medium for player-to-player communication in virtual worlds was text, a medium well suited to identity-play and asynchronous communication, less so to fast-paced coordination and sociability among friends. During the past decade vendors have introduced facilities for gamers to communicate by voice. Yet little research has been conducted to help us understand the influence of voice on the experience of using virtual space: Where, when, and for whom voice is beneficial, and how it might be configured. To address this gap we examined a range of online gaming environments. We analyzed our observations in the light of theory from Human–Computer Interaction, Computer-Supported Cooperative Work, and Computer-Mediated Communication. We conclude that voice radically transforms the experience of online gaming, making virtual spaces more intensely social but removing some of the opportunity for identity play, multitasking, and multigaming while introducing ambiguity over what is being transmitted to whom.
1.029251
0.030181
0.029339
0.029339
0.029339
0.019661
0.008078
0.000019
0
0
0
0
0
0
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
A Computational Model for Distributed Systems Using Operator Nets
Derivation of a distributed algorithm for finding paths in directed networks A distributed algorithm is developed that can be used to compute the topology of a network, given that each site starts with information about sites it is adjacent to, the network is strongly connected, and communication channels are uni-directional. The program is derived and proved correct using assertional reasoning.
Distributed Termination Discussed is a distributed system based on communication among disjoint processes, where each process is capable of achieving a post-condition of its local space in such a way that the conjunction of local post-conditions implies a global post-condition of the whole system. The system is then augmented with extra control communication in order to achieve distributed termination, without adding new channels of communication. The algorithm is applied to a problem of constructing a sorted partition.
An Effective Implementation for the Generalized Input-Output Construct of CSP
Stepwise design of real-time systems The joint action approach to modeling of reactive systems is presented and augmented with real time. This leads to a stepwise design method where temporal logic of actions can be used for formal reasoning, superposition is the key mechanism for transformations, the advantages of closed-system modularity are utilized, logical properties are addressed before real-time properties, and real-time properties are enforced without any specific assumptions on scheduling. As a result, real-time modeling is made possible already at early stages of specification, and increased insensitivity is achieved with respect to properties imposed by implementation environments.
Generalizing Action Systems to Hybrid Systems Action systems have been used successfully to describe discrete systems, i.e., systems with discrete control acting upon a discrete state space. In this paper we extend the action system approach to hybrid systems by defining continuous action systems. These are systems with discrete control over a continuously evolving state, whose semantics is defined in terms of traditional action systems. We show that continuous action systems are very general and can be used to describe a diverse range of hybrid systems. Moreover, the properties of continuous action systems are proved using standard action systems proof techniques.
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
An exploratory contingency model of user participation and MIS use A model is proposed of the relationship between user participation and degree of MIS usage. The model has four dimensions: participation characteristics, system characteristics, system initiator, and the system development environment. Stages of the System Development Life Cycle are considered as participation characteristics, task complexity as a system characteristic, and top management support and user attitudes as parts of the system development environment. The data are from a cross-sectional survey in Korea, covering 134 users of 77 different information systems in 32 business firms. The results of the analysis support the proposed model in general. Several implications of this for MIS managers are then discussed.
From E-R to "A-R" - Modelling Strategic Actor Relationships for Business Process Reengineering
A Decompositional Approach to the Design of Efficient parallel Programs A methodology for the derivation of efficient parallel implementations from program specifications is developed. The goal of the methodology is to decompose a program specification into a collection of module specifications, such that each module may be implemented by a subprogram. The correctness of the whole program is then deduced from the correctness of the property refinement procedure and the correctness of the individual subprograms. The refinement strategy is based on identifying frequently occurring control structures such as sequential composition and iteration. The methodology is developed in the context of the UNITY logic and the UC programming language, and illustrated through the solution of diffusion aggregation in fluid flow simulations.
Incorporating usability into requirements engineering tools The development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. This definition process requires a significant amount of communication between the requestor and the developer of the system. In recent years, several methodologies and tools have been proposed to improve this communication process. This paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool.
Thue-like Sequences and Rainbow Arithmetic Progressions A sequence u = u(1)u(2)...u(n) is said to be nonrepetitive if no two adjacent blocks of u are exactly the same. For instance, the sequence abcbcba contains a repetition bcbc, while abcacbabcbac is nonrepetitive. A well known theorem of Thue asserts that there are arbitrarily long nonrepetitive sequences over the set {a, b, c}. This fact implies, via König's Infinity Lemma, the existence of an infinite ternary sequence without repetitions of any length. In this paper we consider a stronger property defined as follows. Let k >= 2 be a fixed integer and let C denote a set of colors (or symbols). A coloring f : N -> C of positive integers is said to be k-nonrepetitive if for every r >= 1 each segment of kr consecutive numbers contains a k-term rainbow arithmetic progression of difference r. In particular, among any k consecutive blocks of the sequence f = f(1)f(2)f(3)... no two are identical. By an application of the Lovász Local Lemma we show that the minimum number of colors in a k-nonrepetitive coloring is at most 2^(-1) e^(k^2(k-1)/(k-1)^2) k^2(k-1) + 1. Clearly at least k + 1 colors are needed but whether O(k) suffices remains open. This and other types of nonrepetitiveness can be studied on other structures like graphs, lattices, Euclidean spaces, etc., as well. Unlike for the classical Thue sequences, in most of these situations non-constructive arguments seem to be unavoidable. A few of a range of open problems appearing in this area are presented at the end of the paper.
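The definition can be made concrete with a small brute-force checker: a colouring is k-nonrepetitive when every segment of kr consecutive integers contains a rainbow k-term arithmetic progression of difference r, and for k = 2 this reduces to ordinary nonrepetitiveness of adjacent blocks.

```python
# Checks the k-nonrepetitive property from the abstract: every block of k*r
# consecutive integers must contain a k-term AP of difference r whose colours
# are pairwise distinct (a "rainbow" AP).
def is_k_nonrepetitive(colors, k):
    """colors[i] is the colour of integer i+1; checks the property for every
    difference r and every full segment inside this finite prefix."""
    n = len(colors)
    for r in range(1, n // k + 1):
        for start in range(0, n - k * r + 1):        # segment of k*r consecutive integers
            segment_has_rainbow = False
            for a in range(start, start + r):        # k-term APs of difference r in the segment
                ap = [colors[a + j * r] for j in range(k)]
                if len(set(ap)) == k:
                    segment_has_rainbow = True
                    break
            if not segment_has_rainbow:
                return False
    return True

# k = 2 is ordinary nonrepetitiveness of adjacent blocks, matching the examples above.
print(is_k_nonrepetitive(list("abcacbabcbac"), 2))   # True: the prefix is nonrepetitive
print(is_k_nonrepetitive(list("abcbcba"), 2))        # False: it contains the repetition bcbc
```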
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.017534
0.028235
0.019868
0.014147
0.01094
0.005702
0.002386
0.0003
0.000083
0.000035
0.000006
0
0
0
Lossless Microarray Image Compression using Region Based Predictors Microarray image technology is a powerful tool for monitoring the expression of thousands of genes simultaneously. Each microarray experiment produces large amount of image data, hence efficient compression routines that exploit microarray image structures are required. In this paper we introduce a lossless image compression method which segments the pixels of the image into three categories of background, foreground, and spot edges. The segmentation is performed by finding a threshold value which minimizes the weighted sum of the standard deviations of the foreground and background pixels. Each segment of the image is compressed using a separate predictor. The results of the implementation of the method show its superiority compared to the well-known microarray compression schemes as well as to the general lossless image compression standards.
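A minimal sketch of the threshold-selection step described above: scan candidate thresholds and keep the one that minimizes the weighted sum of the standard deviations of the background and foreground classes; the spot-edge class and the per-segment predictors of the full method are omitted, and the toy image below is invented.

```python
# Threshold selection by minimising the weighted sum of class standard
# deviations, as in the segmentation step described above (sketch only).
import numpy as np

def best_threshold(img):
    pixels = img.ravel().astype(float)
    best_t, best_cost = None, np.inf
    for t in np.unique(pixels)[:-1]:                 # candidate thresholds (keep foreground non-empty)
        bg, fg = pixels[pixels <= t], pixels[pixels > t]
        cost = (bg.size * bg.std() + fg.size * fg.std()) / pixels.size
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Toy microarray-like image: dark background with a few bright spots.
img = np.full((32, 32), 20) + np.random.randint(0, 5, (32, 32))
img[8:12, 8:12] = 200
img[20:24, 20:24] = 180
t = best_threshold(img)
background, foreground = img <= t, img > t           # masks fed to separate predictors
```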
A Review of DNA Microarray Image Compression We review the state of the art in DNA microarray image compression. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution. We then summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain results for several popular image coding techniques, including the most recent coding standards. The prediction-based schemes CALIC and JPEG-LS, and JPEG2000 using zero wavelet decomposition levels, are the best performing standard compressors, but all are outperformed by the best microarray-specific technique, Battiato's CNN-based scheme.
The effect of microarray image compression on expression-based classification Current gene-expression microarrays carry enormous amounts of information. Compression is necessary for efficient distribution and storage. This paper examines JPEG2000 compression of cDNA microarray images and addresses the accuracy of classification and feature selection based on decompressed images. Among other options, we choose JPEG2000 because it is the latest international standard for image compression and offers lossy-to-lossless compression while achieving high lossless compression ratios on microarray images. The performance of JPEG2000 has been tested on three real data sets at different compression ratios, ranging from lossless to 45:1. The effects of JPEG2000 compression/decompression on differential expression detection and phenotype classification have been examined. There is less than a 4% change in differential detection at compression rates as high as 20:1, with detection accuracy suffering less than 2% for moderate to high intensity genes, and there is no significant effect on classification at rates as high as 35:1. The supplementary material is available at http://gsp.tamu.edu/web2/Compression.
On denoising and compression of DNA microarray images The annotation of proteins can be achieved by classifying the protein of interest into a certain known protein family to induce its functional and structural features. This paper presents a new method for classifying protein sequences based upon the ...
Progressive lossless compression of medical images This paper describes a lossless compression method for medical images that produces an embedded bit-stream, allowing progressive lossy-to-lossless decoding with L-infinity oriented rate-distortion. The experimental results show that the proposed technique produces better average lossless compression results than several other compression methods, including JPEG2000, JPEG-LS and JBIG, in a publicly available medical image database containing images from several modalities.
An online preprocessing technique for improving the lossless compression of images with sparse histograms This letter addresses the problem of improving the efficiency of lossless compression of images with sparse histograms. An online preprocessing technique is proposed, which, although very simple, is able to provide significant improvements in the compression ratio of the images that it targets and shows a good robustness on other images.
A low-complexity modeling approach for embedded coding of wavelet coefficients We present a new low-complexity method for modeling and coding the bitplanes of a wavelet-transformed image in a fully embedded fashion. The scheme uses a simple ordering model for embedding, based on the principle that coefficient bits that are likely to reduce the distortion the most should be described first in the encoded bitstream. The ordering model is tied to a conditioning model in a way that deinterleaves the conditioned subsequences of coefficient bits, making them amenable to coding with a very simple, adaptive elementary Golomb (1966) code. The proposed scheme, without relying on zerotrees or arithmetic coding, attains PSNR vs. bit rate performance superior to that of SPIHT, and competitive with its arithmetic coding variant, SPIHT-AC
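To illustrate the coding style referred to above, the sketch below run-length codes a sparse bitplane with an elementary Golomb code of fixed order m = 2^k; the fixed (non-adaptive) order and the handling of a trailing partial run are simplifications assumed only for this example.

```python
# Run-length coding of a sparse bitplane with an elementary Golomb code of
# order m = 2**k: a complete run of m zeros costs one '0' bit, and a run of
# r < m zeros ended by a one costs '1' plus r written in k bits.
def eg_encode(bits, k):
    m, out, run = 1 << k, [], 0
    for b in bits:
        if b == 0:
            run += 1
            if run == m:
                out.append('0')                           # a complete run of m zeros
                run = 0
        else:
            out.append('1' + format(run, f'0{k}b'))       # partial run ended by a one
            run = 0
    return ''.join(out), run                              # trailing zeros returned separately

def eg_decode(code, k, trailing):
    m, out, i = 1 << k, [], 0
    while i < len(code):
        if code[i] == '0':
            out += [0] * m
            i += 1
        else:
            r = int(code[i + 1:i + 1 + k], 2)
            out += [0] * r + [1]
            i += 1 + k
    return out + [0] * trailing

bits = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
code, tail = eg_encode(bits, k=2)
assert eg_decode(code, 2, tail) == bits
```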
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Distributed garbage collection using reference counting We describe here an elegant algorithm for the real-time garbage collection of distributed memory. This algorithm makes use of reference counting and is simpler than distributed mark-scan algorithms. It is also truly real-time unlike distributed mark-scan algorithms. It requires no synchronisation between messages and only sends a message between nodes when a reference is deleted. It is also relatively space efficient using at most five bits per reference.
Ant Algorithms for Discrete Optimization This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
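For readers who want to see the metaheuristic in executable form, a minimal Ant System-style sketch for a small random symmetric TSP follows; the instance, the parameter values (alpha, beta, rho, Q, ant and iteration counts) and the update rule are generic textbook choices made for illustration, not the specific algorithms surveyed in this article.

```python
import random, math

# Minimal Ant System-style sketch on a random 12-city symmetric TSP.
random.seed(0)
n = 12
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.dist(pts[i], pts[j]) or 1e-9 for j in range(n)] for i in range(n)]

alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0
tau = [[1.0] * n for _ in range(n)]           # pheromone trails

def build_tour():
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.random() * total, 0.0
        for j, w in weights:                   # roulette-wheel selection
            acc += w
            if acc >= r:
                break
        tour.append(j); unvisited.remove(j)
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best, best_len = None, float('inf')
for _ in range(100):                           # iterations
    tours = [build_tour() for _ in range(10)]  # ants per iteration
    for i in range(n):                         # evaporation
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for t in tours:                            # pheromone deposit
        L = tour_length(t)
        if L < best_len:
            best, best_len = t, L
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            tau[a][b] += Q / L
            tau[b][a] += Q / L
print("best tour length:", round(best_len, 3))
```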
Experience with Formal Methods in Critical Systems Although there are indisputable benefits to society from the introduction of computers into everyday life, some applications are inherently risky. Worldwide, regulatory agencies are examining how to assure safety and security. This study reveals the applicability and limitations of formal methods.
SADT/SAINT: Large scale analysis simulation methodology SADT/SAINT is a highly structured, top-down simulation methodology for defining, analyzing, communicating, and documenting large-scale systems. Structured Analysis and Design Technique (SADT), developed by SofTech, provides a functional representation and a data model of the system that is used to define and communicate the system. System Analysis of Integrated Networks of Tasks (SAINT), currently used by the USAF, is a simulation technique for designing and analyzing man-machine systems but is applicable to a wide range of systems. By linking SADT with SAINT, large-scale systems can be defined in general terms, decomposed to the necessary level of detail, translated into SAINT nomenclature, and implemented into the SAINT program. This paper describes the linking of SADT and SAINT resulting in an enhanced total simulation capability that integrates the analyst, user, and management.
A framework for analyzing and testing requirements with actors in conceptual graphs Software has become an integral part of many people's lives, whether knowingly or not. One key to producing quality software in time and within budget is to efficiently elicit consistent requirements. One way to do this is to use conceptual graphs. Requirements inconsistencies, if caught early enough, can prevent one part of a team from creating unnecessary design, code and tests that would be thrown out when the inconsistency was finally found. Testing requirements for consistency early and automatically is a key to a project being within budget. This paper will share an experience with a mature software project that involved translating software requirements specification into a conceptual graph and recommends several actors that could be created to automate a requirements consistency graph.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.048689
0.055238
0.042349
0.028987
0.009524
0.00032
0.000008
0
0
0
0
0
0
0
A Load-Balanced Algorithm For Parallel Digital Image Warping This paper introduces and compares three parallel algorithms to compute general geometric image transformations on MIMD machines. We propose three variants of a parallel general scheme. We focus on the load balancing and the data redistributions. Experimental results are reported and compared. The implementation has been done using PPCM, a library allowing us to run the program over different parallel machines. We compare logical communication schemes for message-passing machines. Since our parallel algorithm needs global communications such as multiscatters, we study the efficiency of two different logical topologies usable with PPCM. These studies allow us to find the best combination of algorithm and virtual topology to use on a given parallel machine.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
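A hedged illustration of the basic tabu search machinery described here (single-flip neighbourhood, tabu tenure, aspiration by best-so-far) is sketched below for a toy multiconstraint knapsack instance; the instance, the tenure and the infeasibility penalty are invented for the example, and the advanced strategies mentioned in the abstract (probabilistic measures, target analysis, learning) are not reproduced.

```python
import random

# Toy tabu search for a small 0/1 multiconstraint knapsack problem.
# Moves are single-bit flips; a recently flipped variable is tabu unless
# flipping it would beat the best solution found so far (aspiration).
random.seed(1)
n, m = 15, 3                                   # items, constraints
profit = [random.randint(5, 30) for _ in range(n)]
weight = [[random.randint(1, 15) for _ in range(n)] for _ in range(m)]
cap = [int(0.5 * sum(w)) for w in weight]

def evaluate(x):
    """Profit minus an (invented) penalty for constraint violation."""
    over = sum(max(0, sum(w[i] * x[i] for i in range(n)) - c)
               for w, c in zip(weight, cap))
    return sum(profit[i] * x[i] for i in range(n)) - 50 * over, over == 0

x = [0] * n
best_x, (best_val, _) = x[:], evaluate(x)
tabu_until = [0] * n                           # iteration until which a flip stays tabu
tenure = 7

for it in range(1, 301):
    candidates = []
    for i in range(n):
        y = x[:]; y[i] ^= 1
        val, feas = evaluate(y)
        aspiration = feas and val > best_val
        if tabu_until[i] <= it or aspiration:
            candidates.append((val, i, y, feas))
    val, i, y, feas = max(candidates)          # best admissible move
    x = y
    tabu_until[i] = it + tenure
    if feas and val > best_val:
        best_x, best_val = x[:], val

print("best feasible profit:", best_val, "items:", best_x)
```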
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Knowledge Acquisition from Multiple Experts Based on Semantics of Concepts This paper presents one approach to acquire knowledge from multiple experts. The experts are grouped into a multilevel hierarchical structure, according to the type of knowledge acquired. The first level consists of experts who have knowledge about the basic objects and their relationships. The second level of experts includes those who have knowledge about the relationships of the experts at the first level and each higher level accordingly. We show how to derive the most supported opinion among the experts at each level. This is used to order the experts into categories of their competence defined as the support they get from their colleagues.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Sofspec - A Pragmatic Approach To Automated Specification Verification This paper describes a system for the automatic verification of commercial application specifications—SOFSPEC. After having established a relationship to other requirement specification approaches, the user interface and the database schema are presented. The database schema is based on the entity/relationship model and encompasses four entities and six relationships with a varying number of attributes. These are briefly outlined. Then, the paper describes how these entities and relations are checked against one another in order to ascertain the completeness and consistency of the specification before it is finally documented.
The Use of the Entity-Relationship Model as a Schema for Organizing the Data Processing Activities
For large meta information of national integrated statistics Integrated statistics, synthesized from many survey statistics, form an important part of government statistics. A typical example is the System of National Accounts. To develop such a system, it is necessary to prepare, consistently, 1) documents of methods, 2) programs, and 3) a database. However, this is usually not easy because of the large number of data types connected with the system. In this paper, we formulate a language as a means of supporting the design of statistical data integration. This language is based on the data abstraction model and treats four types of semantic hierarchies: generalization, derivation, association (aggregation) and classification. We demonstrate that this language leads to natural documentation of statistical data integration, and that meta information, used in both the programs and the database for the integration, can be generated from the documents.
The development and application of data base design tools and methodology
SA-ER: A Methodology that Links Structured Analysis and Entity-Relationship Modeling for Database Design
An integrated modeling environment based on attributed graphs and graph-grammars Different types of graphs are widely used to represent many types of management science models. Examples include vehicle routing, production planning, simulation and decision trees. In previous work, the author has developed tools and techniques based on graph-grammars to provide interfaces for such models. In this paper, we explore several such models with particular emphasis on integrating different graph-based models within a single environment. It is shown how the environment can combine a variety of visual models in different ways.
From Organization Models to System Requirements: A 'Cooperating Agents' Approach
Software requirements: Are they really a problem? Do requirements arise naturally from an obvious need, or do they come about only through diligent effort—and even then contain problems? Data on two very different types of software requirements were analyzed to determine what kinds of problems occur and whether these problems are important. The results are dramatic: software requirements are important, and their problems are surprisingly similar across projects. New software engineering techniques are clearly needed to improve both the development and statement of requirements.
Requirements engineering with viewpoints. The requirements engineering process involves a clear understanding of the requirements of the intended system. This includes the services required of the system, the system users, its environment and associated constraints. This process involves the capture, analysis and resolution of many ideas, perspectives and relationships at varying levels of detail. Requirements methods based on global reasoning appear to lack the expressive framework to adequately articulate this distributed requirements knowledge structure. The paper describes the problems in trying to establish an adequate and stable set of requirements and proposes a viewpoint-oriented requirements definition (VORD) method as a means of tackling some of these problems. This method structures the requirements engineering process using viewpoints associated with sources of requirements. The paper describes VORD in the light of current viewpoint-oriented requirements approaches and shows how it improves on them. A simple example of a bank auto-teller system is used to demonstrate the method.
Representing open requirements with a fragment-based specification The paper describes and evaluates an alternative representation scheme for software applications in which the requirements are poorly understood or dynamic (i.e., open). The discussion begins with a classification of requirements specification properties and their relationship to the software process. Emphasis is placed on the representation schemes that are most appropriate for projects with open requirements in a flexible development setting. Fragment-based specifications, which capture a conceptual model of the product under development, are recommended for such applications. The paper describes an environment, in production use since 1980, that employs this form of specification. Evaluations of the environment's economic benefit and of the specification scheme's properties follow. A final section contains observations about the nature of future software applications and the environments necessary to support their development and maintenance
Software size estimation of object-oriented systems The strengths and weaknesses of existing size estimation techniques are discussed. The nature of software size estimation is considered. The proposed method takes advantage of a characteristic of object-oriented systems, the natural correspondence between specification and implementation, in order to enable users to come up with better size estimates at early stages of the software development cycle. Through a statistical approach the method also provides a confidence interval for the derived size estimates. The relation between the presented software sizing model and project cost estimation is also considered.
Specware: Formal Support for Composing Software
Structuring and verifying distributed algorithms We present a structuring and verification method for distributed algorithms. The basic idea is that an algorithm to be verified is stepwise transformed into a high level specification through a number of steps, so-called coarsenings. At each step some mechanism of the algorithm is identified, verified and removed while the basic computation of the original algorithm is preserved. The method is based on a program development technique called superposition and it is formalized within the refinement calculus. We will show the usefulness of the method by verifying a complex distributed algorithm for minimum-hop route maintenance due to Chu.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.200198
0.200198
0.200198
0.200198
0.066772
0.000179
0.000067
0.000044
0.000028
0.000013
0.000001
0
0
0
Asynchronous Output-Feedback Control of Networked Nonlinear Systems With Multiple Packet Dropouts: T–S Fuzzy Affine Model-Based Approach This paper investigates the problem of robust output-feedback control for a class of networked nonlinear systems with multiple packet dropouts. The nonlinear plant is represented by Takagi-Sugeno (T-S) fuzzy affine dynamic models with norm-bounded uncertainties, and stochastic variables that satisfy the Bernoulli random binary distribution are adopted to characterize the data-missing phenomenon. The objective is to design an admissible output-feedback controller that guarantees the stochastic stability of the resulting closed-loop system with a prescribed disturbance attenuation level. It is assumed that the plant premise variables, which are often the state variables or their functions, are not measurable so that the controller implementation with state-space partition may not be synchronous with the state trajectories of the plant. Based on a piecewise quadratic Lyapunov function combined with an S-procedure and some matrix inequality convexifying techniques, two different approaches to robust output-feedback controller design are developed for the underlying T-S fuzzy affine systems with unreliable communication links. The solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches.
Robust sliding-mode control for uncertain time-delay systems: an LMI approach This note is devoted to robust sliding-mode control for time-delay systems with mismatched parametric uncertainties. A delay-independent sufficient condition for the existence of linear sliding surfaces is given in terms of linear matrix inequalities, based on which the corresponding reaching motion controller is also developed. The results are illustrated by an example.
Delay-dependent robust H∞ control for uncertain discrete-time fuzzy systems with time-varying delays This paper deals with the robust H∞ control problem for discrete-time Takagi-Sugeno (T-S) fuzzy systems with norm-bounded parametric uncertainties and interval time-varying delays. First, based on a new Lyapunov functional, we present a sufficient condition guaranteeing that the resulting closed-loop system is robustly stable and satisfies a prescribed H∞ performance level. The Lyapunov functional used here depends on not only the fuzzy basis function but on the lower and upper bounds of the time-varying delay as well. Second, two classes of delay-dependent conditions for the existence of the concerned H∞ fuzzy controllers are given in terms of relaxed linear matrix inequalities (LMIs), and a desired controller can be designed by using the solutions to these LMIs. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed design method.
Robust H∞ control of Takagi-Sugeno fuzzy systems with state and input time delays This paper addresses the robust H∞ fuzzy control problem for nonlinear uncertain systems with state and input time delays through the Takagi-Sugeno (T-S) fuzzy model approach. The delays are assumed to be interval time-varying delays, and no restriction is imposed on the derivative of the time delay. Based on the Lyapunov-Krasovskii functional method, delay-dependent sufficient conditions for the existence of an H∞ controller are proposed in linear matrix inequality (LMI) format. Illustrative examples are given to show the effectiveness and merits of the proposed fuzzy controller design methodology.
New approach on robust delay-dependent H∞ control for uncertain T-S fuzzy systems with interval time-varying delay This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither any model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
A New Model Transformation of Discrete-Time Systems With Time-Varying Delay and Its Application to Stability Analysis. This technical note focuses on analyzing a new model transformation of uncertain linear discrete-time systems with time-varying delay and applying it to robust stability analysis. The uncertainty is assumed to be norm-bounded and the delay intervally time-varying. A new comparison model is proposed by employing a new approximation for delayed state, and then lifting method and simple Lyapunov-Krasovskii functional method are used to analyze the scaled small gain of this comparison model. This new approximation results in a much smaller error than the existing ones. Based on the scaled small gain theorem, new stability criteria are proposed in terms of linear matrix inequalities. Moreover, it is shown that the obtained conditions can be established through direct Lyapunov method. Two numerical examples are presented to illustrate the effectiveness and superiority of our results over the existing ones.
Reciprocally convex approach to stability of systems with time-varying delays Whereas the upper bound lemma for matrix cross-product, introduced by Park (1999) and modified by Moon, Park, Kwon, and Lee (2001), plays a key role in guiding various delay-dependent criteria for delayed systems, the Jensen inequality has become an alternative as a way of reducing the number of decision variables. It directly relaxes the integral term of quadratic quantities into the quadratic term of the integral quantities, resulting in a linear combination of positive functions weighted by the inverses of convex parameters. This paper suggests the lower bound lemma for such a combination, which achieves performance behavior identical to approaches based on the integral inequality lemma but with much less decision variables, comparable to those based on the Jensen inequality lemma.
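For reference, the two-term special case of the lower bound lemma that is usually quoted from this work can be written as below; it is stated here in its commonly cited form as a reader's aid and should be checked against the paper for the exact statement.

```latex
% Two-term reciprocally convex lower bound (commonly cited form).
% For a scalar alpha in (0,1), vectors x_1, x_2 and matrices R_1 > 0, R_2 > 0:
% if there exists S such that the block matrix on the right is positive
% semidefinite, then
\[
\frac{1}{\alpha}\, x_1^{\top} R_1 x_1 + \frac{1}{1-\alpha}\, x_2^{\top} R_2 x_2
\;\ge\;
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{\top}
\begin{bmatrix} R_1 & S \\ S^{\top} & R_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
\qquad
\begin{bmatrix} R_1 & S \\ S^{\top} & R_2 \end{bmatrix} \succeq 0 .
\]
```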
Dissipativity analysis of neural networks with time-varying delays This paper focuses on the problem of delay-dependent dissipativity analysis for a class of neural networks with time-varying delays. A free-matrix-based inequality method is developed by introducing a set of slack variables, which can be optimized via existing convex optimization algorithms. Then, by employing Lyapunov functional approach, sufficient conditions are derived to guarantee that the considered neural networks are strictly ( Q , S , R ) -γ-dissipative. The conditions are presented in terms of linear matrix inequalities and can be readily checked and solved. Numerical examples are finally provided to demonstrate the effectiveness and advantages of the proposed new design techniques.
A looped-functional approach for robust stability analysis of linear impulsive systems A new functional-based approach is developed for the stability analysis of linear impulsive systems. The new method, which introduces looped functionals, considers non-monotonic Lyapunov functions and leads to LMI conditions devoid of exponential terms. This allows one to easily formulate dwell-time results, for both certain and uncertain systems. It is also shown that this approach may be applied to a wider class of impulsive systems than existing methods. Some examples, notably on sampled-data systems, illustrate the efficiency of the approach.
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
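The Cedar package itself is not available in this form, but the programming model the paper argues for is easy to demonstrate with any modern RPC facility. The hypothetical sketch below uses Python's standard xmlrpc module only to show the "remote call reads like a local call" idea; the host, port and exported function are assumptions for the example and have nothing to do with the Cedar implementation.

```python
# Server side: export an ordinary function so remote clients can call it.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")
# server.serve_forever()   # uncomment to actually run the server

# Client side (normally a separate process): the remote call reads like a
# local one; the stubs handle binding, marshalling and transport.
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
# print(proxy.add(2, 3))   # -> 5 when the server above is running
```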
The JPEG still picture compression standard A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for `lossy' compression, and a predictive method for `lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method
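To give a flavour of the DCT-based path, here is a small numpy sketch of the forward 8x8 DCT and a uniform quantizer applied to one block; the uniform step size stands in for the standard's quantization tables and the entropy coding stage is omitted, so this illustrates only the transform and quantization ideas, not a JPEG codec.

```python
import numpy as np

# 8x8 orthonormal DCT-II basis matrix, applied per block as in baseline JPEG.
N = 8
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(block):        # 2-D DCT of one 8x8 block
    return C @ block @ C.T

def idct2(coeff):       # inverse transform (C is orthogonal)
    return C.T @ coeff @ C

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128   # level shift

step = 16.0                                  # invented uniform quantizer step
coeff = dct2(block)
quantized = np.round(coeff / step)           # lossy step: many coefficients become zero
reconstructed = idct2(quantized * step) + 128

print("nonzero coefficients:", int(np.count_nonzero(quantized)), "of 64")
print("max reconstruction error:",
      float(np.abs(reconstructed - (block + 128)).max()))
```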
Issues in automated negotiation and electronic commerce: extending the contract net framework In this paper we discuss a number of previously unaddressed issues that arise in automated negotiation among self-interested agents whose rationality is bounded by computational complexity. These issues are presented in the context of iterative task allocation negotiations. First, the reasons why such agents need to be able to choose the stage and level of commitment dynamically are identified. A protocol that allows such choices through conditional commitment breaking penalties is presented. Next, the implications of bounded rationality are analysed. Several tradeoffs between allocated computation and negotiation benefits and risk are enumerated, and the necessity of explicit local deliberation control is substantiated. Techniques for linking negotiation items and multiagent contracts are presented as methods for escaping local optima in the task allocation process. Implementing both methods among self-interested bounded rational agents is discussed. Finally, the problem of message congestion among self-interested agents is described, and alternative remedies are presented.
Hierarchical Model for Analysis and Recognition of Handwritten Characters Different hierarchical models in pattern analysis and recognition are proposed, based on occurrence probability of patterns. As an important application of recognizing handprinted characters, three typical kinds of hierarchical models such as M89-89, M89-36 and M36-36 have been presented, accompanied by the computer algorithms for computing recognition rates of pattern parts. Moreover, a comparative study of their recognition rates has been conducted theoretically; and numerical experiments have been carried out to verify the analytical conclusions made. Various hierarchical models deliberated in this paper can provide users more or better choices of pattern models in practical application, and lead to a uniform computational scheme (or code). The recognition rates of parts can be improved remarkably by a suitable hierarchical model. For the model M89-36, in which case some of the Canadian standard handprinted characters have multiple occurrence probabilities, the total mean recognition rates of the given sample may reach 120% of that by the model proposed by Li et al., and 156% of that obtained from the subjective experiments reported by Suen.
New Stability Criteria For Linear Systems With Interval Time-Varying Delays Via An Extended State Vector This paper considers the stability problem of time-delayed systems with interval time-varying delays. Based on a new Lyapunov-Krasovskii functional, improved stability criteria are derived in terms of linear matrix inequalities (LMIs). By efficiently applying the Jensen inequality lemma, the lower bound lemma for reciprocal convexity, and the Wirtinger-based integral inequality lemma, a tighter upper bound of the derivative of the proposed Lyapunov-Krasovskii functional is obtained. Numerical examples show the effectiveness of the proposed approaches by comparison of the maximum delay bounds.
1.078547
0.100434
0.066956
0.050326
0.033599
0.014365
0.000642
0.000097
0.000019
0
0
0
0
0
Knowledge Visualization from Conceptual Structures This paper addresses the problem of automatically generating displays from conceptual graphs for visualization of the knowledge contained in them. Automatic display generation is important in validating the graphs and for communicating the knowledge they contain. Displays may be classified as literal, schematic, or pictorial, and also as static versus dynamic. At this time prototype software has been developed to generate static schematic displays of graphs representing knowledge of digital systems. The prototype software generates displays in two steps, by first joining basis displays associated with basis graphs from which the graph to be displayed is synthesized, and then assigning screen coordinates to the display elements. Other strategies for mapping conceptual graphs to schematic displays are also discussed. Keywords Visualization, Representation Mapping, Conceptual Graphs, Schematic Diagrams, Pictures
Ossa - A Conceptual Modelling System for Virtual Realities As virtual reality systems achieve new heights of visual and auditory realism, the need for improving the underlying conceptual modelling facilities becomes increasingly apparent. The Ossa system provides a media-independent modelling environment based on a production system model that uses conceptual graphs to represent both the facts and the rules. Using conceptual graphs allows for interaction with the virtual world using multiple modalities (e.g. graphics and natural language). Conceptual graphs also allow for highly expressive facts and rules, and a diagrammatic programming technique. The motivation, design, and implementation of the Ossa system are discussed.
A CG-Based Behavior Extraction System This paper defines "behavior extraction" as the act of analyzing natural language sources on digital devices, such as specifications and patents, in order to find the behaviors that these documents describe and represent those behaviors in a formal manner. These formal representations may then be used for simulation or to aid in the automatic or manual creation of models for these devices. The system described here uses conceptual graphs for these formal representations, in the semantic analysis of natural language documents, and for its word-knowledge database. This paper explores the viability of such a conceptual-graph-based system for extracting behaviors from a set of patents. The semantic analyzer is found to be a viable system for behavior extraction, now requiring the extension of its dictionary and grammar rules to make it useful in creating models.
Modelling and Simulating Human Behaviours with Conceptual Graphs This paper describes an application of conceptual graphs in knowledge engineering. We are developing an assistance system for the acquisition and the validation of stereotyped behaviour models in human organizations. The system is built on a representation language requiring to be at expert level, to have a clear semantics and to be interpretable. The proposed language is an extension of conceptual graphs dedicated to the representation of behaviours. Tools exploiting this language are provided to assist the construction of behaviour models and their simulation on concrete cases.
Learning and inferencing in user ontology for personalized Semantic Web search User modeling is aimed at capturing the users' interests in a working domain, which forms the basis of providing personalized information services. In this paper, we present an ontology based user model, called user ontology, for providing personalized information service in the Semantic Web. Different from the existing approaches that only use concepts and taxonomic relations for user modeling, the proposed user ontology model utilizes concepts, taxonomic relations, and non-taxonomic relations in a given domain ontology to capture the users' interests. As a customized view of the domain ontology, a user ontology provides a richer and more precise representation of the user's interests in the target domain. Specifically, we present a set of statistical methods to learn a user ontology from a given domain ontology and a spreading activation procedure for inferencing in the user ontology. The proposed user ontology model with the spreading activation based inferencing procedure has been incorporated into a semantic search engine, called OntoSearch, to provide personalized document retrieval services. The experimental results, based on the ACM digital library and the Google Directory, support the efficacy of the user ontology approach to providing personalized information services.
A Comparison of Languages which Operationalize and Formalise KADS Models of Expertise In the field of knowledge engineering, dissatisfaction with the rapid-prototyping approach has led to a number of more principled methodologies for the construction of knowledge-based systems. Instead of immediately implementing the gathered and interpreted knowledge in a given implementation formalism according to the rapid-prototyping approach, many such methodologies centre around the notion of a conceptual model: an abstract, implementation independent description of the relevant problem solving expertise. A conceptual model should describe the task which is solved by the system and the knowledge which is required by it. Although such conceptual models have often been formulated in an informal way, recent years have seen the advent of formal and operational languages to describe such conceptual models more precisely, and operationally as a means for model evaluation. In this paper, we study a number of such formal and operational languages for specifying conceptual models. To enable a meaningful comparison of such languages, we focus on languages which are all aimed at the same underlying conceptual model, namely that from the KADS method for building KBS. We describe eight formal languages for KADS models of expertise, and compare these languages with respect to their modelling primitives, their semantics, their implementations and their applications. Future research issues in the area of formal and operational specification languages for KBS are identified as the result of studying these languages. The paper also contains an extensive bibliography of research in this area.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Knowledge Specification of an Expert System It is proposed that knowledge specifications be used as bases for developing and maintaining expert systems. It is suggested that through knowledge acquisition, a knowledge specification representing the kinds of knowledge and reasoning processes used to perform a task can be produced. A prototype can then be built to test and improve the knowledge specification. When a stable and satisfactory specification is obtained, a production system for end users, based on the specification rather than on the prototype, can be implemented. The knowledge specification guides system changes during maintenance. An experimental study to assess and improve this methodology is reported. Prototyping is discussed, an expert system knowledge specification is presented, and a methodology for creating a knowledge specification using conceptual structures is described. The methodology is compared with a currently popular methodology for expert system development. The proposal is primarily intended for medium- to large-scale expert systems, which may have several developers and whose users will not be developing the systems.
A Pluralistic Knowledge-Based Approach to Software Specification We propose a pluralistic attitude to software specification, where multiple viewpoints/methods are integrated to enhance our understanding of the required system. In particular, we investigate how this process can be supported by heuristics acquired from well-known software specification methods such as Data Flow Diagrams, Petri Nets and Entity Relationship Models. We suggest the classification of heuristics by method and activity, and show how they can be formalised in Prolog. More general heuristics indicating complementarity consistency between methods are also formalised. A practical by-product has been the generation of "expert-assistance" to the integration of methods: PRISMA is a pluralistic knowledge-based system supporting the coherent construction of a software specification from multiple viewpoints. The approach is illustrated via examples. Theoretical and practical issues related to specification processes and environments supporting a pluralistic paradigm are also discussed.
Integrity Checking in a Logic-Oriented ER Model
Multiview—an exploration in information systems development
Striving For Correctness In developing information technology, you want assurance that systems are secure and reliable, but you cannot have assurance or security without correctness. We discuss methods used to achieve correctness, focusing on weaknesses and approaches that management might take to increase belief in correctness. Formal methods, simulation, testing, and process modeling are addressed in detail. Structured programming, life-cycle modeling like the spiral model, use of CASE tools, use of formal methods, object-oriented design, reuse of existing code are also mentioned. Reliance on these methods involves some element of belief since no validated metrics on the effectiveness of these methods exist. Suggestions for using these methods as the basis for managerial decisions conclude the paper.
A nonlinear VQ-based predictive lossless image coder A new lossless predictive image coder is introduced and tested. The predictions are made with a nonlinear, vector quantizer based, adaptive predictor. The prediction errors are losslessly compressed with an arithmetic coder that presumes they are Laplacian distributed with variances that are estimated during the prediction process, as in the approach of Howard and Vitter (1992)
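As a rough sketch of the predict-then-code structure, the following uses a trivial left-neighbour predictor and an empirical entropy estimate in place of the paper's adaptive VQ-based predictor and its variance-adaptive arithmetic coder; the synthetic scanline and all names are invented for the example.

```python
import numpy as np

# Skeleton of a predictive lossless coder: predict each pixel from its
# left neighbour and measure the empirical entropy of the residuals,
# which bounds what an ideal entropy coder could achieve on them.
rng = np.random.default_rng(0)
row = np.clip(np.cumsum(rng.integers(-3, 4, 4096)) + 128, 0, 255)  # smooth scanline

pred = np.empty_like(row)
pred[0], pred[1:] = 128, row[:-1]          # left-neighbour prediction
residual = row - pred                      # the decoder can invert this exactly

def entropy_bits(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"raw pixels:  {entropy_bits(row):.2f} bits/symbol")
print(f"residuals:   {entropy_bits(residual):.2f} bits/symbol")
```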
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.035743
0.0416
0.033333
0.00908
0.002222
0.000577
0.000142
0.000026
0.000002
0
0
0
0
0
Ant Algorithms for Discrete Optimization This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
Improving the performance of Apache Hadoop on pervasive environments through context-aware scheduling. This article proposes to improve Apache Hadoop scheduling through a context-aware approach. Apache Hadoop is the most popular implementation of the MapReduce paradigm for distributed computing, but its design does not adapt automatically to computing nodes’ context and capabilities. By introducing context-awareness into Hadoop, we intend to dynamically adapt its scheduling to the execution environment. This is a necessary feature in the context of pervasive grids, which are heterogeneous, dynamic and shared environments. The solution has been incorporated into Hadoop and assessed through controlled experiments. The experiments demonstrate that context-awareness provides comparative performance gains, especially when some of the resources disappear during execution.
A lightweight decentralized service placement policy for performance optimization in fog computing A decentralized optimization policy for service placement in fog computing is presented. The optimization aims to place the most popular services as close to the users as possible. The experimental validation is done in the iFogSim simulator and by comparing our algorithm with the simulator’s built-in policy. The simulation is characterized by modeling a microservice-based application for different experiment sizes. Results showed that our decentralized algorithm places the most popular services closer to users, improving network usage and service latency of the most requested applications, at the expense of a latency increment for the less requested services and a greater number of service migrations.
Automatic determination of grain size for efficient parallel processing The authors propose a method for automatic determination and scheduling of modules from a sequential program.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
A historical perspective of speech recognition What do we know now that we did not know 40 years ago?
A novel method for solving the fully neutrosophic linear programming problems The most widely used technique for solving and optimizing a real-life problem is linear programming (LP), due to its simplicity and efficiency. However, in order to handle the impreciseness in the data, the neutrosophic set theory plays a vital role, as it simulates the human decision-making process by considering all aspects of a decision (i.e., agree, not sure and disagree). Keeping these advantages, in the present work we introduce neutrosophic LP models whose parameters are represented by trapezoidal neutrosophic numbers and present a technique for solving them. The presented approach has been illustrated with some numerical examples and shows its superiority over the state of the art by comparison. Finally, we conclude that the proposed approach is simpler, more efficient and capable of solving the LP models as compared to other methods.
Secure Medical Data Transmission Model for IoT-Based Healthcare Systems. Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security and integrity of medical data have become major challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed through integrating either 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption scheme is built using a combination of Advanced Encryption Standard, and Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters: the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values varied from 50.59 to 57.44 in case of color images and from 50.52 to 56.09 with the gray-scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray-scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were one for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the confidential patient's data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX was partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: [email protected]), (URL: http://www-formal.stanford.edu/jmc/), by starting with the class of expressions called S-expressions and the functions called...
Functional algorithm design For an adequate account of a functional approach to the principles of algorithm design we need to find new translations of classical algorithms and data structures, translations that do not compromise efficiency. For an adequate formal account of a functional approach to the specification and design of programs we need to include relations in the underlying theory. These and other points are illustrated in the context of sorting algorithms.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Behavioral Subtyping, Specification Inheritance, and Modular Reasoning 2006 CR Categories: D. 2.2 [Software Engineering] Design Tools and Techniques, Object-oriented design methods; D. 2.3 [Software Engineering] Coding Tools and Techniques, Object-oriented programming; D. 2.4 [Software Engineering] Software/Program Verification, Class invariants, correctness proofs, formal methods, programming by contract, reliability, tools, Eiffel, JML; D. 2.7 [Software Engineering] Distribution, Maintenance, and Enhancement, Documentation; D. 3.1 [Programming Languages] Formal Definitions and Theory, Semantics; D. 3.2 [Programming Languages] Language Classifications, Object-oriented languages; D. 3.3 [Programming Languages] Language Constructs and Features, classes and objects, inheritance; F. 3.1 [Logics and Meanings of Programs] Specifying and Verifying and Reasoning about Programs, Assertions, invariants, logics of programs, pre-and post-conditions, specification techniques;
Refinement in Object-Z and CSP In this paper we explore the relationship between refinement in Object-Z and refinement in CSP. We prove with a simple counterexample that refinement within Object-Z, established using the standard simulation rules, does not imply failures-divergences refinement in CSP. This contradicts accepted results.Having established that data refinement in Object-Z and failures refinement in CSP are not equivalent we identify alternative refinement orderings that may be used to compare Object-Z classes and CSP processes. When reasoning about concurrent properties we need the strength of the failures-divergences refinement ordering and hence identify equivalent simulation rules for Object-Z. However, when reasoning about sequential properties it is sufficient to work within the simpler relational semantics of Object-Z. We discuss an alternative denotational semantics for CSP, the singleton failures semantic model, which has the same information content as the relational model of Object-Z.
Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrences by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements to maintain high image quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image.
1.077926
0.071111
0.071111
0.058444
0.035556
0.001111
0.000444
0.000037
0
0
0
0
0
0
A conceptual framework for ASIC design An attempt is made to gain a better understanding of the nature of ASIC (application-specific integrated circuit) design. This is done from a decision-making perspective, in terms of three knowledge frames: the design process, the design hyperspace, and the design repertoire. The design process frame emphasizes the hierarchical design approach and presents the methodology as a formalization of the design process. The design hyperspace concept relates to the recognition of design alternatives. Analysis techniques for evaluating algorithmic and architectural alternatives are collected and classified to form the design repertoire. This conceptual framework is an effective instrument for bridging the widening gap between system designers and VLSI technology. It also provides a conceptual platform for the development of tools for high-level architectural designs.
Yoda: a framework for the conceptual design of VLSI systems As the complexity of the VLSI design process grows, it becomes increasingly more costly to conduct design in a trial-and-error fashion because the number of possible design alternatives, as well as the cost of a complete synthesis and fabrication cycle, increase dramatically. A conceptual design addresses this problem by allowing the designer to conduct initial feasibility studies, giving guidance on the most promising design alternatives with a preliminary indication of estimated performance. The authors describe a general framework that supports this conceptual design and a particular instance of such a framework, called Yoda, that supports the conceptual design phase for digital signal processing filters.
A semantic network representation of personal construct systems A method is presented for transforming and combining heuristic knowledge gathered from multiple domain experts into a common semantic network representation. Domain expert knowledge is gathered with an interviewing tool based on personal construct theory. The problem of expressing and using a large body of knowledge is fundamental to artificial intelligence and its application to knowledge-based or expert systems. The semantic network is a powerful, general representation that has been used as a tool for the definition of other knowledge representations. Combining multiple approaches to a domain of knowledge may reinforce mutual experiences, information, facts, and heuristics, yet still retain unique, specialist knowledge gained from different experiences. An example application of the algorithm is presented in two separate expert domains
On Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX was partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: [email protected]), (URL: http://www-formal.stanford.edu/jmc/), by starting with the class of expressions called S-expressions and the functions called...
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
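The recommendation above (ten-fold stratified cross-validation) is easy to make concrete. The sketch below builds stratified folds by hand and averages held-out accuracy; a toy nearest-centroid classifier stands in for the C4.5 and Naive-Bayes learners used in the paper's experiments, so the numbers are illustrative only.

```python
import numpy as np

def stratified_kfold_indices(y, k=10, seed=0):
    """Split sample indices into k folds that preserve class proportions."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = rng.permutation(np.where(y == cls)[0])
        for j, i in enumerate(idx):
            folds[j % k].append(i)
    return [np.array(f) for f in folds]

def cross_validated_accuracy(X, y, fit, predict, k=10, seed=0):
    """Average held-out accuracy over k stratified folds."""
    folds = stratified_kfold_indices(y, k, seed)
    accs = []
    for j in range(k):
        test = folds[j]
        train = np.concatenate([folds[m] for m in range(k) if m != j])
        model = fit(X[train], y[train])
        accs.append(np.mean(predict(model, X[test]) == y[test]))
    return float(np.mean(accs))

# A deliberately simple nearest-centroid classifier used only to exercise the loop.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = np.array(sorted(model))
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
    return classes[d.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(1.5, 1, (100, 5))])
    y = np.array([0] * 100 + [1] * 100)
    acc = cross_validated_accuracy(X, y, fit_centroids, predict_centroids)
    print(f"10-fold stratified CV accuracy: {acc:.3f}")
```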
A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.2
0
0
0
0
0
0
0
0
0
0
0
Local-Prediction-Based Difference Expansion Reversible Watermarking This paper investigates the use of local prediction in difference expansion reversible watermarking. For each pixel, a least square predictor is computed on a square block centered on the pixel and the corresponding prediction error is expanded. The same predictor is recovered at detection without any additional information. The proposed local prediction is general and it applies regardless of the predictor order or the prediction context. For the particular cases of least square predictors with the same context as the median edge detector, gradient-adjusted predictor or the simple rhombus neighborhood, the local prediction-based reversible watermarking clearly outperforms the state-of-the-art schemes based on the classical counterparts. Experimental results are provided.
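To make the local-prediction idea concrete, the sketch below fits a least-squares predictor over a pixel block for a causal four-neighbour context and then applies the classic difference expansion e -> 2e + bit to the resulting prediction error. It is a simplified illustration, not the paper's exact scheme: the context choice, the block geometry and the bookkeeping needed for exact reversibility and overflow control are all assumptions.

```python
import numpy as np

def local_ls_predict(block):
    """Fit least-squares weights for a causal 4-neighbour context (W, N, NW, NE)
    over a square block and return the weights.  Illustrative only: the actual
    scheme fits one predictor per pixel on a block centred on that pixel and
    recomputes it identically at the detector."""
    ctx, targets = [], []
    h, w = block.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            ctx.append([block[i, j-1], block[i-1, j], block[i-1, j-1], block[i-1, j+1]])
            targets.append(block[i, j])
    A = np.asarray(ctx, dtype=float)
    b = np.asarray(targets, dtype=float)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights

def expand_error(x, pred, bit):
    """Classic difference expansion of the prediction error: e -> 2e + bit."""
    e = int(x) - int(round(pred))
    return int(round(pred)) + 2 * e + bit

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    block = rng.integers(100, 140, size=(12, 12))
    w = local_ls_predict(block)
    ctx = np.array([block[6, 5], block[5, 6], block[5, 5], block[5, 7]], dtype=float)
    pred = float(ctx @ w)
    print("weights:", np.round(w, 3), "watermarked pixel:", expand_error(block[6, 6], pred, 1))
```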
Improved control for low bit-rate reversible watermarking The distortion introduced by reversible watermarking depends on the embedding bit-rate. This paper proposes a fine control of the embedding bit-rate for low capacity histogram shifting reversible watermarking. The basic idea of our approach is to split the prediction error histogram into several histograms and to ensure the fine tuning of the bit-rate by selecting the appropriate bins for each histogram. The splitting is performed as a function of the prediction context. The bins (two for each histogram, one for the right side, another for the left side of the histogram) are selected by linear programming in order to minimize the distortion introduced by the watermarking. The proposed scheme outperforms in terms of embedding distortion the prior state of the art.
A high capacity reversible data hiding scheme based on generalized prediction-error expansion and adaptive embedding In this paper, a high capacity reversible image data hiding scheme is proposed based on a generalization of prediction-error expansion (PEE) and an adaptive embedding strategy. For each pixel, its prediction value and complexity measurement are firstly computed according to its context. Then, a certain amount of data bits will be embedded into this pixel by the proposed generalized PEE. Here, the complexity measurement is partitioned into several levels, and the embedded data size is determined by the complexity level such that more bits will be embedded into a pixel located in a smoother region. The complexity level partition and the embedded data size of each level are adaptively chosen for the best performance with an advisable parameter selection strategy. In this way, the proposed scheme can well exploit image redundancy to achieve a high capacity with rather limited distortion. Experimental results show that the proposed scheme outperforms the conventional PEE and some state-of-the-art algorithms by improving both marked image quality and maximum embedding capacity.
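The sketch below illustrates the generalized prediction-error-expansion step described above: a rhombus predictor supplies the prediction, the variance of the four neighbours serves as the complexity measurement, and the number of embedded bits per pixel grows as the context gets smoother. The complexity thresholds and bit allocations are made-up parameters, and overflow handling and the adaptive level-partition selection of the actual scheme are omitted.

```python
import numpy as np

def rhombus_predict(img, i, j):
    """Average of the four rhombus neighbours (N, S, E, W)."""
    return int(round((int(img[i-1, j]) + int(img[i+1, j]) +
                      int(img[i, j-1]) + int(img[i, j+1])) / 4.0))

def embed_pixel(x, pred, complexity, bits, levels=(5, 15)):
    """Generalised prediction-error expansion: the number of embedded bits k
    depends on the local complexity level (smoother context -> larger k).
    `bits` is an iterator of 0/1 payload bits; thresholds are illustrative."""
    if complexity < levels[0]:
        k = 2
    elif complexity < levels[1]:
        k = 1
    else:
        return x, 0                              # too busy: leave pixel unchanged
    value = 0
    for _ in range(k):                           # pack k payload bits
        value = (value << 1) | next(bits)
    e = int(x) - pred
    return pred + (e << k) + value, k

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = rng.integers(100, 110, size=(8, 8))
    payload = iter([1, 0, 1, 1, 0, 1])
    i, j = 3, 3
    pred = rhombus_predict(img, i, j)
    ctx = [img[i-1, j], img[i+1, j], img[i, j-1], img[i, j+1]]
    new_val, k = embed_pixel(img[i, j], pred, float(np.var(ctx)), payload)
    print(f"pixel {img[i, j]} -> {new_val}, {k} bit(s) embedded")
```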
A Unified Data Embedding and Scrambling Method Conventionally, data embedding techniques aim at maintaining high output image quality so that the difference between the original and the embedded images is imperceptible to the naked eye. Recently, as a new trend, some researchers exploited reversible data embedding techniques to deliberately degrade image quality to a desirable level of distortion. In this paper, a unified data embedding-scrambling technique called UES is proposed to achieve two objectives simultaneously, namely, high payload and adaptive scalable quality degradation. First, a pixel intensity value prediction method called checkerboard-based prediction is proposed to accurately predict 75% of the pixels in the image based on the information obtained from 25% of the image. Then, the locations of the predicted pixels are vacated to embed information while degrading the image quality. Given a desirable quality (quantified in SSIM) for the output image, UES guides the embedding-scrambling algorithm to handle the exact number of pixels, i.e., the perceptual quality of the embedded-scrambled image can be controlled. In addition, the prediction errors are stored at a predetermined precision using the structure side information to perfectly reconstruct or approximate the original image. In particular, given a desirable SSIM value, the precision of the stored prediction errors can be adjusted to control the perceptual quality of the reconstructed image. Experimental results confirmed that UES is able to perfectly reconstruct or approximate the original image with SSIM value > 0.99 after completely degrading its perceptual quality while embedding at 7.001 bpp on average.
General Framework to Histogram-Shifting-Based Reversible Data Hiding Histogram shifting (HS) is a useful technique of reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework to construct HS-based RDH. By the proposed framework, one can get a RDH algorithm by simply designing the so-called shifting and embedding functions. Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
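A minimal instance of the shifting/embedding-function framework is ordinary single-peak histogram shifting, sketched below on a stream of prediction errors: occurrences of the peak bin carry one payload bit each, and every bin to the right of the peak is shifted by one to make room. Real schemes choose bins adaptively and record overflow side information; this sketch skips both and is illustrative only.

```python
import numpy as np

def hs_embed(errors, payload):
    """Classic single-bin histogram shifting on a sequence of prediction errors.
    The peak bin hides one bit per occurrence (stay-or-shift), and every bin to
    its right is shifted by one to make room."""
    errors = np.asarray(errors, dtype=int)
    vals, counts = np.unique(errors, return_counts=True)
    peak = int(vals[np.argmax(counts)])
    bits = iter(payload)
    out = errors.copy()
    for idx, e in enumerate(errors):
        if e > peak:
            out[idx] = e + 1                 # shifting function
        elif e == peak:
            try:
                out[idx] = e + next(bits)    # embedding function
            except StopIteration:
                pass                         # payload exhausted: leave as peak
    return out, peak

def hs_extract(marked, peak):
    """Recover the embedded bits and restore the original errors."""
    bits, restored = [], []
    for e in marked:
        if e == peak:
            bits.append(0); restored.append(peak)
        elif e == peak + 1:
            bits.append(1); restored.append(peak)
        elif e > peak + 1:
            restored.append(e - 1)
        else:
            restored.append(e)
    return bits, restored

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    errs = np.rint(rng.laplace(scale=1.5, size=50)).astype(int)
    marked, peak = hs_embed(errs, [1, 0, 1, 1])
    bits, restored = hs_extract(marked, peak)
    print("peak bin:", peak, "recovered bits:", bits[:4], "lossless:", list(restored) == list(errs))
```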
Low distortion transform for reversible watermarking. This paper proposes a low-distortion transform for prediction-error expansion reversible watermarking. The transform is derived by taking a simple linear predictor and by embedding the expanded prediction error not only into the current pixel but also into its prediction context. The embedding ensures the minimization of the square error introduced by the watermarking. The proposed transform introduces less distortion than the classical prediction-error expansion for complex predictors such as the median edge detector or the gradient-adjusted predictor. Reversible watermarking algorithms based on the proposed transform are analyzed. Experimental results are provided.
Modeling and low-complexity adaptive coding for image prediction residuals This paper elaborates on the use of discrete, two-sided geometric distribution models for image prediction residuals. After providing achievable bounds for universal coding of a rich family of models, which includes traditional image models, we present a new family of practical prefix codes for adaptive image compression. This family is optimal for two-sided geometric distributions and is an extension of the Golomb (1966) codes. Our new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the rough approximations used when a code designed for non-negative integers is matched to the encoding of any integer. We also provide adaptation criteria for a further simplified, sub-optimal family of codes used in practice
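For reference, the baseline this work improves on is the plain Golomb/Rice code applied after the usual folding of signed residuals onto the non-negative integers, which is exactly the rough approximation mentioned in the abstract. The sketch below shows that baseline (folding, Rice coding, brute-force parameter choice); it does not implement the paper's codes for two-sided geometric distributions.

```python
def map_residual(e):
    """Fold a signed prediction residual onto the non-negative integers
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...), the usual zig-zag mapping
    applied before a Golomb-style code."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Golomb code with parameter m = 2**k (a Rice code): unary quotient
    followed by k binary remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, f"0{k}b") if k else ""
    return "1" * q + "0" + remainder

def choose_k(residuals, max_k=8):
    """Pick the Rice parameter that minimises total code length (brute force)."""
    mapped = [map_residual(e) for e in residuals]
    return min(range(max_k), key=lambda k: sum(len(rice_encode(n, k)) for n in mapped))

if __name__ == "__main__":
    residuals = [0, -1, 3, 2, -4, 0, 1, -2, 5, 0]
    k = choose_k(residuals)
    code = "".join(rice_encode(map_residual(e), k) for e in residuals)
    print(f"k = {k}, {len(code)} bits total:", code)
```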
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically, the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity
Techniques for automatically correcting words in text Research aimed at correcting words in text has focused on three progressively more difficult problems: (1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent word correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.
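The second of the three problems above, isolated-word error correction, is commonly attacked with edit-distance ranking against a lexicon. The sketch below shows that baseline: a word missing from the lexicon is flagged as a non-word and candidate corrections are ranked by Levenshtein distance. The lexicon and the distance threshold are illustrative, and none of the error-pattern or n-gram techniques surveyed in the article are included.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(word, lexicon, max_dist=2):
    """Isolated-word correction: flag a non-word and rank lexicon candidates
    by edit distance (ties broken alphabetically)."""
    if word in lexicon:
        return word, []                                # not a non-word error
    scored = sorted((edit_distance(word, w), w) for w in lexicon)
    return None, [w for d, w in scored if d <= max_dist]

if __name__ == "__main__":
    lexicon = {"specification", "verification", "correction", "detection"}
    print(correct("corection", lexicon))
```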
Ant Algorithms for Discrete Optimization This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
Select Z Bibliography This bibliography contains a list of references concerned with the formal Z notation that are either available as published papers, books, selected technical reports, or on-line. Some references on the related B-Method are also included. The bibliography is in alphabetical order by author name(s).
A compile-time scheduling heuristic for interconnection-constrained heterogeneous processor architectures The authors present a compile-time scheduling heuristic called dynamic level scheduling, which accounts for interprocessor communication overhead when mapping precedence-constrained, communicating tasks onto heterogeneous processor architectures with limited or possibly irregular interconnection structures. This technique uses dynamically-changing priorities to match tasks with processors at each step, and schedules over both spatial and temporal dimensions to eliminate shared resource contention. This method is fast, flexible, widely targetable, and displays promising performance
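The sketch below illustrates the flavour of dynamic level scheduling: at each step the scheduler pairs a ready task with a processor so as to maximise the dynamic level, taken here as the task's static level minus its earliest possible start time, where the start time accounts for interprocessor communication. It assumes a fully connected, homogeneous machine with a single fixed communication cost, so it is a much-simplified stand-in for the heterogeneous, interconnection-constrained setting of the paper.

```python
def static_levels(succ, cost):
    """Static level: longest computation path from a task to an exit task."""
    sl = {}
    def level(t):
        if t not in sl:
            sl[t] = cost[t] + max((level(s) for s in succ.get(t, [])), default=0)
        return sl[t]
    for t in cost:
        level(t)
    return sl

def dynamic_level_schedule(succ, pred, cost, comm, n_procs):
    """Greedy scheduling: repeatedly pick the (ready task, processor) pair that
    maximises DL = static_level - max(data_ready, proc_free), where data_ready
    includes a fixed communication delay when producer and consumer differ."""
    sl = static_levels(succ, cost)
    finish, placed = {}, {}
    proc_free = [0.0] * n_procs
    unscheduled = set(cost)
    while unscheduled:
        ready = [t for t in unscheduled if all(p in finish for p in pred.get(t, []))]
        best = None
        for t in ready:
            for p in range(n_procs):
                data_ready = max((finish[u] + (0 if placed[u] == p else comm)
                                  for u in pred.get(t, [])), default=0.0)
                start = max(data_ready, proc_free[p])
                dl = sl[t] - start
                if best is None or dl > best[0]:
                    best = (dl, t, p, start)
        _, t, p, start = best
        placed[t], finish[t] = p, start + cost[t]
        proc_free[p] = finish[t]
        unscheduled.discard(t)
    return placed, finish

if __name__ == "__main__":
    succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
    pred = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
    cost = {"a": 2.0, "b": 3.0, "c": 3.0, "d": 1.0}
    print(dynamic_level_schedule(succ, pred, cost, comm=2.0, n_procs=2))
```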
An algorithm for blob hierarchy layout We present an algorithm for the aesthetic drawing of basic hierarchical blob structures, of the kind found in higraphs and statecharts and in other diagrams in which hierarchy is depicted as topological inclusion. Our work could also be useful in window system dynamics, and possibly also in things like newspaper layout, etc. Several criteria for aesthetics are formulated, and we discuss their motivation, our methods of implementation and the algorithm's performance.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.027654
0.044444
0.037037
0.022222
0.005556
0.002963
0.000002
0
0
0
0
0
0
0
A Window Inference Tool for Refinement
Mechanising some Advanced Refinement Concepts We describe how proof rules for three advanced refinement features are mechanically verified using the HOL theorem prover. These features are data refinement, backwards data refinement and superposition refinement of initialised loops. We also show how applications of these proof rules to actual program refinement can be checked using the HOL system, with the HOL system generating the verification conditions. 1 Introduction Stepwise refinement is a methodology for developing programs from...
A Tactic Driven Refinement Tool
Program Transformations and Refinements in HOL
A Program Refinement Tool The refinement calculus for the development of programs from specifications is well suited to mechanised support. We review the requirements for tool support of refinement as gleaned from our experience with existing refinement tools, and report on the design and implementation of a new tool to support refinement based on these requirements. The main features of the new tool are close integration of refinement and proof in a single tool (the same mechanism is used for both), good management of the refinement context, an extensible theory base that allows the tool to be adapted to new application domains, and a flexible user interface.
Laws of data refinement A specification language typically contains sophisticated data types that are expensive or even impossible to implement. Their replacement with simpler or more efficiently implementable types during the programming process is called data refinement. We give a new formal definition of data refinement and use it to derive some basic laws. The derived laws are constructive in that used in conjunction with the known laws of procedural refinement they allow us to calculate a new specification from a given one in which variables are to be replaced by other variables of a different type.
Program Transformation Systems Interest is increasing in the transformational approach to programming and in mechanical aids for supporting the program development process. Available aids range from simple editor-like devices to rather powerful interactive transformation systems and even to automatic synthesis tools. This paper reviews and classifies transformation systems and is intended to acquaint the reader with the current state of the art and provide a basis for comparing the different approaches. It is also designed to provide easy access to specific details of the various methodologies.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
An Approach to Object-Orientation in Action Systems We extend the action system formalism with a notion of objects that can be active and distributed. With this extension we can model class-based systems as action systems. Moreover, as the introduced constructs can be translated into ordinary action systems, we can use the theory developed for action systems, especially the refinement calculus, even for class-based systems. We show how inheritance can be modelled in different ways via class refinement. Refining a class with another class within the refinement calculus ensures that the original behavior of the class is maintained throughout the refinements. Finally, we show how to reuse proofs and entire class modules in a refinement step.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Powerful Techniques for the Automatic Generation of Invariants. When proving invariance properties of programs one is faced with two problems. The first problem is related to the necessity of proving tautologies of the considered assertion language, whereas the second manifests in the need of finding sufficiently strong invariants. This paper focuses on the second problem and describes techniques for the automatic generation of invariants. The first set of these techniques is applicable on sequential transition systems and allows to derive so-called local ...
Understanding quality in conceptual modeling With the increasing focus on early development as a major factor in determining overall quality, many researchers are trying to define what makes a good conceptual model. However, existing frameworks often do little more than list desirable properties. The authors examine attempts to define quality as it relates to conceptual models and propose their own framework, which includes a systematic approach to identifying quality-improvement goals and the means to achieve them. The framework has two unique features: it distinguishes between goals and means by separating what you are trying to achieve in conceptual modeling from how to achieve it (and it makes the goals more realistic by introducing the notion of feasibility); and it is closely linked to linguistic concepts because modeling is essentially making statements in some language.
A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges Model-Based Testing (MBT) represents a feasible and interesting testing strategy where test cases are generated from formal models describing the software behavior/structure. The MBT field is continuously evolving, as can be observed in the increasing number of MBT techniques published in the technical literature. However, there is still a gap between research on MBT and its application in the software industry, mainly occasioned by the lack of information regarding the concepts, available techniques, and challenges in using this testing strategy in real software projects. This chapter presents information intended to support researchers and practitioners reducing this gap, consequently contributing to the transfer of this technology from the academia to the industry. It includes information regarding the concepts of MBT, characterization of 219 MBT available techniques, approaches supporting the selection of MBT techniques for software projects, risk factors that may influence the use of these techniques in the industry together with some mechanisms to mitigate their impact, and future perspectives regarding the MBT field.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.042921
0.035937
0.025453
0.021741
0.013333
0.003294
0.000159
0.000032
0.000008
0
0
0
0
0
Local and global analysis: complementary activities for increasing the effectiveness of requirements verification and validation This paper presents a unique approach to connecting requirements engineering activities into a process framework that can be employed to obtain quality requirements with reduced expenditures of effort and cost. It is well understood that early detection and correction of errors offers the greatest potential for improving requirements quality and avoiding cost overruns in the development of software systems. To realize the maximum benefits of this ideology, we propose a two-phase model that is novel in that it introduces the concept of verification and validation (V&V) early in the requirements life cycle. In the first phase, we perform V&V immediately following the elicitation of requirements for each individually distinct function of the system. Because the first phase focuses on capturing smaller sets of related requirements iteratively, each corresponding V&V activity is better focused for detecting and correcting errors in each requirement set. In the second phase, a complementary verification activity is initiated; the corresponding focus is on the quality of linkages between requirements sets rather than on the requirements within the sets. Consequently, this approach reduces the effort in verification and enhances the focus on the verification task. The second phase also addresses the business concerns collectively, and thereby produces requirements that are not only quality adherent, but are also business compliant. Our approach, unlike other models, has a minimal time delay between the elicitation of requirements and the execution of the V&V activities. Because of this short time gap, the stakeholders have a clearer recollection of the requirements, their context and rationale; this enhances the feedback during the V&V activities. Furthermore, our model includes activities that closely align with the effective requirements engineering processes employed in the software industry. Thus, our approach facilitates a better understanding of the flow of requirements, and provides guidance for the implementation of the requirements engineering process.This paper describes a well-defined, two-phase requirements engineering approach that incorporates the principles of early V&V to provide the benefits of reduced costs and enhanced quality requirements.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
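The sketch below gives a bare-bones tabu search for the 0/1 multiconstraint knapsack problem in the spirit of the approach described above: single bit-flip moves, a recency-based tabu list with an aspiration criterion, and a penalty on constraint violation in place of the extreme-point machinery, advanced-level strategies and target analysis of the paper. Tenure, penalty weight and iteration counts are arbitrary illustrative choices.

```python
import random

def tabu_knapsack(values, weights, capacities, n_iters=500, tenure=7, seed=0):
    """Simple tabu search for the 0/1 multiconstraint knapsack problem.
    The move is a single bit flip; recently flipped items are tabu for `tenure`
    iterations unless the move improves on the best feasible solution found so
    far (aspiration).  Infeasible neighbours are penalised rather than forbidden."""
    rng = random.Random(seed)
    n, m = len(values), len(capacities)
    x = [0] * n
    tabu_until = [0] * n
    best_x, best_val = list(x), 0

    def evaluate(sol):
        loads = [sum(weights[k][i] * sol[i] for i in range(n)) for k in range(m)]
        excess = sum(max(0, loads[k] - capacities[k]) for k in range(m))
        return sum(values[i] * sol[i] for i in range(n)), excess

    for it in range(1, n_iters + 1):
        best_move, best_score = None, None
        for i in range(n):
            cand = list(x)
            cand[i] ^= 1
            value, excess = evaluate(cand)
            score = value - 10 * max(values) * excess     # penalise infeasibility
            aspiration = excess == 0 and value > best_val
            if tabu_until[i] > it and not aspiration:
                continue
            if best_score is None or score > best_score:
                best_move, best_score = i, score
        if best_move is None:
            continue
        x[best_move] ^= 1
        tabu_until[best_move] = it + tenure
        value, excess = evaluate(x)
        if excess == 0 and value > best_val:
            best_x, best_val = list(x), value
    return best_x, best_val

if __name__ == "__main__":
    rng = random.Random(1)
    values = [rng.randint(10, 100) for _ in range(20)]
    weights = [[rng.randint(1, 30) for _ in range(20)] for _ in range(3)]
    capacities = [sum(w) // 3 for w in weights]
    print(tabu_knapsack(values, weights, capacities))
```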
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An experimental natural-language processor for generating data type specifications
Application and benefits of formal methods in software development Formal methods for software development receive much attention in research centres, but are rarely used in industry for the development of (large) software systems. One of the reasons is that little is known about the integration of formal methods in the software process, and the exact role of formal methods in the software life-cycle is still unclear. In this paper, a detailed examination is made of the application of, and the benefits resulting from, a generally applicable formal method (VDM) in a standard model for software development (DoD-STD-2167A). Currently, there is no general agreement on how formal methods should be used, but in order to analyse the use of formal methods in the software process, a clear view of such use is essential. Therefore, we show what is meant by 'using a formal method'. The different activities of DoD-STD-2167A are analysed with regard to their suitability for applying VDM and the benefits that may result from applying VDM for that activity. Based on this analysis, an overall view on the usage of formal methods in the software process is formulated.
Multiview—an exploration in information systems development
Toward synthesis from English descriptions This paper reports on a research project to design a system for automatically interpreting English specifications of digital systems in terms of design representation formalisms currently employed in CAD systems. The necessary processes involve the machine analysis of English and the synthesis of models from the specifications. The approach being investigated is interactive and consists of syntactic scanning, semantic analysis, interpretation generation, and model integration.
On Formalism in Specifications A critique of a natural-language specification, followed by presentation of a mathematical alternative, demonstrates the weakness of natural language and the strength of formalism in requirements specifications.
Expanding the utility of semantic networks through partitioning An augmentation of semantic networks is presented in which the various nodes and arcs are partitioned into "net spaces." These net spaces delimit the scopes of quantified variables, distinguish hypothetical and imaginary situations from reality, encode alternative worlds considered in planning, and focus attention at particular levels of detail.
Visual feedback for validation of informal specifications In automatically synthesizing simulation models from informal specifications, the ambiguity of natural language (English) leads to multiple interpretations. The authors report on a system, called the Model Generator, which provides visual feedback showing the interpretation of specification statements that have been automatically translated to a knowledge representation called conceptual graphs. The visual feedback is based on a combination of block diagrams and Petri net graphs.
STATEMATE: a working environment for the development of complex reactive systems This paper provides a brief overview of the STATEMATE system, constructed over the past three years by i-Logix, Inc., and Ad Cad Ltd. STATEMATE is a graphical working environment, intended for the specification, analysis, design and documentation of large and complex reactive systems, such as real-time embedded systems, control and communication systems, and interactive software. It enables a user to prepare, analyze and debug diagrammatic, yet precise, descriptions of the system under development from three inter-related points of view, capturing structure, functionality and behavior. These views are represented by three graphical languages, the most intricate of which is the language of statecharts used to depict reactive behavior over time. In addition to the use of statecharts, the main novelty of STATEMATE is in the fact that it 'understands' the entire descriptions perfectly, to the point of being able to analyze them for crucial dynamic properties, to carry out rigorous animated executions and simulations of the described system, and to create running code automatically. These features are invaluable when it comes to the quality and reliability of the final outcome.
Conceptual modeling for data and knowledge management In order to exploit knowledge embedded in databases and to migrate from data to knowledge management environments, conceptual modeling languages must offer more expressiveness than traditional modeling languages. This paper proposes the conceptual graph formalism as such a modeling language. It shows through an example and a comparison with Telos, a semantically rich knowledge modeling language, that it is suited for that purpose. The conceptual graph formalism offers simplicity of use through its graphical components and small set of constructs and operators. It allows easy migration from database to knowledge base environments. Thus, this paper advocates its use. (C) 2000 Elsevier Science B.V. All rights reserved.
Software engineering in the twenty-first century
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows, specifying means stating independently of any implementation the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
Scheduling precedence graphs in systems with interprocessor communication times
Temporal predicate transforms and fair termination It is usually assumed that implementations of nondeterministic programs may resolve the nondeterminacy arbitrarily. In some circumstances, however, we may wish to assume that the implementation is in some sense fair, by which we mean that in its long-term behaviour it does not show undue bias in forever favouring some nondeterministic choices over others. Under the assumption of fairness many otherwise failing programs become terminating. We construct various predicate transformer semantics of such fairly-terminating programs. The approach is based on formulating the familiar temporal operators always, eventually, and infinitely often as predicate transformers. We use these operators to construct a framework that accommodates many kinds of fairness, including varieties of so-called weak and strong fairness in both their all-levels and top-level forms. Our formalization of the notion of fairness does not exploit the syntactic shape of programs, and allows the familiar nondeterminacy and fair nondeterminacy to be arbitrarily combined in the one program. Invariance theorems for reasoning about fairly terminating programs are proved. The semantics admits probabilistic implementations provided that unbounded fairness is excluded.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.110119
0.05
0.016667
0.01013
0.003847
0.000573
0.000015
0.000001
0
0
0
0
0
0
TreeMatrix: A Hybrid Visualization of Compound Graphs We present a hybrid visualization technique for compound graphs (i.e. networks with a hierarchical clustering defined on the nodes) that combines the use of adjacency matrices, node-link and arc diagrams to show the graph, and also combines the use of nested inclusion and icicle diagrams to show the hierarchical clustering. The graph visualized with our technique may have edges that are weighted and/or directed. We first explore the design space of visualizations of compound graphs and present a taxonomy of hybrid visualization techniques. We then present our prototype, which allows clusters (i.e. subtrees) of nodes to be grouped into matrices or split apart using a radial menu. We also demonstrate how our prototype can be used in the software engineering domain, and compare it to the commercial matrix-based visualization tool Lattix using a qualitative user study. © 2012 Wiley Periodicals, Inc.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
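As a rough illustration of the tabu search mechanics sketched in the abstract above (a single-flip neighbourhood, a recency-based tabu tenure, and an aspiration rule keyed to the best objective value), the following Python sketch applies them to a 0/1 multiconstraint knapsack. The function name, the fixed tenure, and the feasibility-only move filter are illustrative assumptions; the paper's specialized choice rules, learning mechanisms, and Target Analysis are not reproduced.

```python
import numpy as np

def tabu_knapsack(values, weights, capacities, iters=200, tenure=7):
    """Toy tabu search for a 0/1 multiconstraint knapsack: single-flip moves,
    a recency-based tabu list, and an aspiration rule that accepts a tabu move
    when it beats the best value found so far."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)        # shape (constraints, items)
    capacities = np.asarray(capacities, dtype=float)
    n = len(values)
    x = np.zeros(n, dtype=int)
    best_x, best_val = x.copy(), 0.0
    tabu_until = np.zeros(n, dtype=int)

    for t in range(1, iters + 1):
        move, move_val = None, -np.inf
        for j in range(n):                             # evaluate all single flips
            cand = x.copy()
            cand[j] ^= 1
            if np.any(weights @ cand > capacities):    # keep only feasible moves
                continue
            val = float(values @ cand)
            if (tabu_until[j] <= t or val > best_val) and val > move_val:
                move, move_val = j, val
        if move is None:
            break
        x[move] ^= 1
        tabu_until[move] = t + tenure                  # forbid flipping back for a while
        if move_val > best_val:
            best_x, best_val = x.copy(), move_val
    return best_x, best_val
```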
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, as well as on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Improved Embedding for Prediction-Based Reversible Watermarking This paper aims at reducing the embedding distortion of prediction error expansion reversible watermarking. Instead of embedding the entire expanded difference into the current pixel, the difference is split between the current pixel and its prediction context. The modification of the context generates an increase of the following prediction errors. Global optimization is obtained by tuning the amount of data embedded into context pixels. Prediction error expansion reversible watermarking schemes based on median edge detector (MED), gradient-adjusted predictor (GAP), and a simplified GAP version, SGAP, are investigated. Improvements are obtained for all the predictors. Notably good results are obtained for SGAP-based schemes. The improved SGAP appears to outperform GAP-based reversible watermarking.
Improved control for low bit-rate reversible watermarking The distortion introduced by reversible watermarking depends on the embedding bit-rate. This paper proposes a fine control of the embedding bit-rate for low capacity histogram shifting reversible watermarking. The basic idea of our approach is to split the prediction error histogram into several histograms and to ensure the fine tuning of the bit-rate by selecting the appropriate bins for each histogram. The splitting is performed as a function of the prediction context. The bins (two for each histogram, one for the right side, another for the left side of the histogram) are selected by linear programming in order to minimize the distortion introduced by the watermarking. The proposed scheme outperforms in terms of embedding distortion the prior state of the art.
A high capacity reversible data hiding scheme based on generalized prediction-error expansion and adaptive embedding In this paper, a high capacity reversible image data hiding scheme is proposed based on a generalization of prediction-error expansion (PEE) and an adaptive embedding strategy. For each pixel, its prediction value and complexity measurement are firstly computed according to its context. Then, a certain amount of data bits will be embedded into this pixel by the proposed generalized PEE. Here, the complexity measurement is partitioned into several levels, and the embedded data size is determined by the complexity level such that more bits will be embedded into a pixel located in a smoother region. The complexity level partition and the embedded data size of each level are adaptively chosen for the best performance with an advisable parameter selection strategy. In this way, the proposed scheme can well exploit image redundancy to achieve a high capacity with rather limited distortion. Experimental results show that the proposed scheme outperforms the conventional PEE and some state-of-the-art algorithms by improving both marked image quality and maximum embedding capacity.
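The prediction-error expansion idea that runs through the watermarking abstracts above can be illustrated with a minimal per-pixel sketch. This is the classical single-bit PEE step with a fixed threshold, not the generalized multi-bit, adaptive scheme of the paper; the function names and the threshold parameter are assumptions, and overflow/underflow handling is omitted.

```python
def pee_embed_pixel(pixel, prediction, bit, threshold=2):
    """Classical single-bit prediction-error expansion step (illustrative)."""
    e = int(pixel) - int(prediction)
    if -threshold <= e < threshold:
        marked_e = 2 * e + bit          # expansion: the payload bit rides in the LSB of 2e
        embedded = True
    elif e >= threshold:
        marked_e = e + threshold        # shift the right tail out of the expansion range
        embedded = False
    else:
        marked_e = e - threshold        # shift the left tail
        embedded = False
    return int(prediction) + marked_e, embedded

def pee_extract_pixel(marked, prediction, threshold=2):
    """Recover the original pixel and, if present, the embedded bit."""
    e_marked = int(marked) - int(prediction)
    if -2 * threshold <= e_marked < 2 * threshold:
        bit = e_marked % 2              # Python % keeps this in {0, 1} for negatives too
        return int(prediction) + e_marked // 2, bit
    if e_marked >= 2 * threshold:
        return int(prediction) + e_marked - threshold, None
    return int(prediction) + e_marked + threshold, None
```

Marked errors land in [-2T, 2T-1] only when a bit was embedded, which is what makes blind extraction and exact recovery of the original pixel possible.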
Efficient Reversible Image Watermarking By Using Dynamical Prediction-Error Expansion Reversible watermarking is a special watermarking technique which allows one to extract both the hidden data and the exact original signal from the watermarked content. In this paper, a recently introduced reversible image watermarking method based on prediction-error expansion is further investigated and improved. Instead of taking the pixels with small prediction-error as embedding pixels (i.e., the pixels that carry watermark bits), we propose to select these pixels in a dynamical way. In fact, we can pre-calculate the embedding distortion for each possible choice of embedding pixels, and determine the one with minimal distortion. We see that, with this choice of embedding pixels, the distortion is reduced comparing with the original method, and thus, the proposed approach has a better performance. In addition, experimental results show that the novel method outperforms some state-of-the-art algorithms.
Improved rhombus interpolation for reversible watermarking by difference expansion The paper proposes an interpolation error expansion reversible watermarking algorithm. The main novelty of the paper is a modified rhombus interpolation scheme. The four horizontal and vertical neighbors are considered and, depending on their values, the interpolated pixel is computed as the average of the horizontal pixels, of the vertical pixels or of the entire set of four pixels. Experimental results are provided. The proposed scheme outperforms the results obtained by using the average on the four horizontal and vertical neighbors and the ones obtained by using well known predictors as MED or GAP.
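A minimal sketch of a context-dependent rhombus predictor in the spirit of the abstract above: the prediction switches between the horizontal pair, the vertical pair, or all four neighbours. The switching threshold and the exact selection rule are assumptions, not the paper's criterion.

```python
import numpy as np

def rhombus_predict(img, i, j, t=2):
    """Predict pixel (i, j) from its four horizontal/vertical neighbours."""
    n, s = int(img[i - 1, j]), int(img[i + 1, j])
    w, e = int(img[i, j - 1]), int(img[i, j + 1])
    if abs(n - s) - abs(w - e) > t:      # strong vertical variation: trust the horizontal pair
        return (w + e) // 2
    if abs(w - e) - abs(n - s) > t:      # strong horizontal variation: trust the vertical pair
        return (n + s) // 2
    return (n + s + w + e) // 4          # otherwise average all four neighbours
```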
General Framework to Histogram-Shifting-Based Reversible Data Hiding Histogram shifting (HS) is a useful technique of reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework to construct HS-based RDH. By the proposed framework, one can get a RDH algorithm by simply designing the so-called shifting and embedding functions. Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
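The shifting-and-embedding pattern behind histogram-shifting RDH can be sketched over a 1-D array of prediction errors as below. Only the positive side of the histogram is used and overflow is ignored, so this illustrates the framework's two functions rather than any specific algorithm from the paper; the function names are assumptions.

```python
import numpy as np

def hs_embed(errors, bits, peak=0):
    """Errors equal to `peak` each carry one payload bit; errors above the peak
    are shifted by one to free the bin peak+1 (assumes enough payload bits)."""
    out = np.asarray(errors).copy()
    payload = iter(bits)
    for k, e in enumerate(out):
        if e > peak:
            out[k] = e + 1               # shifting step
        elif e == peak:
            out[k] = e + next(payload)   # embedding step: 0 stays, 1 moves to peak+1
    return out

def hs_extract(marked, peak=0):
    """Inverse of hs_embed: recover the payload bits and the original errors."""
    out = np.asarray(marked).copy()
    bits = []
    for k, e in enumerate(out):
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1)
            out[k] = peak
        elif e > peak + 1:
            out[k] = e - 1               # undo the shift
    return out, bits
```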
Watermarking digital image and video data. A state-of-the-art overview The authors begin by discussing the need for watermarking and the requirements. They go on to discuss digital watermarking techniques based on correlation and techniques that are not based on correlation.
Efficient spatial-spectral compression of hyperspectral data Mean-normalized vector quantization (M-NVQ) has been demonstrated to be the preferred technique for lossless compression of hyperspectral data. In this paper, a jointly optimized spatial M-NVQ/spectral DCT technique is shown to produce compression ratios significantly better than those obtained by the optimized spatial M-NVQ technique alone
On ordering color maps for lossless predictive coding Linear predictive techniques perform poorly when used with color-mapped images where pixel values represent indices that point to color values in a look-up table. Reordering the color table, however, can lead to a lower entropy of prediction errors. In this paper, we investigate the problem of ordering the color table such that the absolute sum of prediction errors is minimized. The problem turns out to be intractable, even for the simple case of one-dimensional (1-D) prediction schemes. We give two heuristic solutions for the problem and use them for ordering the color table prior to encoding the image by lossless predictive techniques. We demonstrate that significant improvements in actual bit rates can be achieved over dictionary-based coding schemes that are commonly employed for color-mapped images
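Since the optimal ordering problem is intractable, heuristics are used; one simple heuristic (an assumption here, not necessarily the paper's) is a greedy nearest-neighbour chaining of palette entries, sketched below. The resulting permutation would be applied to the colour table, with the index image re-mapped accordingly, before running a lossless predictive coder.

```python
import numpy as np

def greedy_palette_order(palette):
    """Chain palette entries so consecutive indices map to similar colours,
    which tends to lower the entropy of index prediction errors."""
    palette = np.asarray(palette, dtype=float)
    remaining = list(range(len(palette)))
    order = [remaining.pop(0)]                 # arbitrary starting entry
    while remaining:
        last = palette[order[-1]]
        dists = [np.linalg.norm(palette[r] - last, ord=1) for r in remaining]
        order.append(remaining.pop(int(np.argmin(dists))))
    return order                               # new index -> old index permutation
```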
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically, the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity
Towards a Compositional Method for Coordinating Gamma Programs With the growing complexity of software, incurred by the widespread acceptance of parallel and distributed computer systems and networks, program design would benefit from clearly separating the correctness issues (the computation) from efficiency issues (the coordination). Gamma has been shown to be a powerful and expressive programming model that allows the basic computations of a program to be expressed with a minimum of control. This enables the programmer to defer efficiency related decisions...
Agent-based support for communication between developers and users in software design Research in knowledge-based software engineering has led to advances in the ability to specify and automatically generate software. Advances in the support of upstream activities have focussed on assisting software developers. We examine the possibility of extending computer-based support in the software development process to allow end users to participate, providing feedback directly to developers. The approach uses the notion of "agents" developed in artificial intelligence research and concepts of participatory design. Namely, agents monitor end users working with prototype systems and report mismatches between developers' expectations and a system's actual usage. At the same time, the agents provide end users with an opportunity to communicate with developers, either synchronously or asynchronously. The use of agents is based on actual software development experiences.
The Skip-Innovation Model for Sparse Images On sparse images, contiguous runs of identical symbols often occur in the same coding context. This paper proposes a model for efficiently encoding such runs in a two-dimensional setting. Because it is model based, the method can be used with any coding scheme. An experimental coder using the model compresses the CCITT fax documents 2% better than JBIG and is more than three times as fast. A low complexity application of the model is shown to dramatically improve the compression performance of JPEG-LS on structured material.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.012113
0.018182
0.012121
0.011369
0.01059
0.006614
0.000155
0.000038
0.000004
0
0
0
0
0
Microanalysis: Acquiring Database Semantics in Conceptual Graphs Relational databases are in widespread use, yet they suffer from serious limitations when one uses them for reasoning about real-world enterprises. This is due to the fact that database relations possess no inherent semantics. This paper describes an approach called microanalysis that we have used to effectively capture database semantics represented by conceptual graphs. The technique prescribes a manual knowledge acquisition process whereby each relation schema is captured in a single conceptual graph. The schema's graph can then easily be instantiated for each tuple in the database forming a set of graphs representing the entire database's semantics. Although our technique originally was developed to capture semantics in a restricted domain of interest, namely database inference detection, we believe that domain-directed microanalysis is a general approach that can be of significant value for databases in many domains. We describe the approach and give a brief example.
Managing Multiple Requirements Perspectives with Metamodels Stakeholder conflicts can be productive in requirements engineering. A requirements-engineering project should ensure that crucial requirements are captured from at least two perspectives, preferably in a notation of the customer's choosing. Capturing, monitoring, and resolving multiple perspectives is difficult and time-consuming when done by hand. Our experience with ConceptBase, a meta-data-management system, shows that a simple but customizable metamodeling approach, combined with an advanced query facility, produces higher quality requirements documents in less time. Our experience shows that conceptual metamodeling technology can be a valuable complement to informal teamwork methods of business analysis and requirements engineering. In particular, the use of representations and cross-perspective analysis can help identify a wide variety of conflicts and, perhaps more important, monitor them.
A Framework For Integrating Multiple Perspectives In System-Development - Viewpoints This paper outlines a framework which supports the use of multiple perspectives in system development, and provides a means for developing and applying systems design methods. The framework uses "viewpoints" to partition the system specification, the development method and the formal representations used to express the system specifications. This VOSE (viewpoint-oriented systems engineering) framework can be used to support the design of heterogeneous and composite systems. We illustrate the use of the framework with a small example drawn from composite system development and give an account of prototype automated tools based on the framework.
Requirements Validation Through Viewpoint Resolution A specific technique-viewpoint resolution-is proposed as a means of providing early validation of the requirements for a complex system, and some initial empirical evidence of the effectiveness of a semi-automated implementation of the technique is provided. The technique is based on the fact that software requirements can and should be elicited from different viewpoints, and that examination of the differences resulting from them can be used as a way of assisting in the early validation of requirements. A language for expressing views from different viewpoints and a set of analogy heuristics for performing a syntactically oriented analysis of views are proposed. This analysis of views is capable of differentiating between missing information and conflicting information, thus providing support for viewpoint resolution.
Understanding the requirements for developing open source software systems This study presents an initial set of findings from an empirical study of social processes, technical system configurations, organizational contexts, and interrelationships that give rise to open software. The focus is directed at understanding the requirements for open software development efforts, and how the development of these requirements differs from those traditional to software engineering and requirements engineering. Four open software development communities are described, examined, and compared to help discover what these differences may be. Eight kinds of software informalisms are found to play a critical role in the elicitation, analysis, specification, validation, and management of requirements for developing open software systems. Subsequently, understanding the roles these software informalisms take in a new formulation of the requirements development process for open source software is the focus of this study. This focus enables considering a reformulation of the requirements engineering process and its associated artifacts or (in)formalisms to better account for the requirements for developing open source software systems.
Multistage negotiation for distributed constraint satisfaction A cooperation paradigm and coordination protocol for a distributed planning system consisting of a network of semi-autonomous agents with limited internode communication and no centralized control is presented. A multistage negotiation paradigm for solving distributed constraint satisfaction problems in this kind of system has been developed. The strategies presented enable an agent in a distributed planning system to become aware of the extent to which its own local decisions may have adverse nonlocal impact in planning. An example problem is presented in the context of transmission path restoration for dedicated circuits in a communications network. Multistage negotiation provides an agent with sufficient information about the impact of local decisions on a nonlocal state so that the agent may make local decisions that are correct from a global perspective, without attempting to provide a complete global state to all agents. Through multistage negotiation, an agent is able to recognize when a set of global goals cannot be satisfied, and is able to solve a related problem by finding a way of satisfying a reduced set of goals
Integrating Specifications: A Similarity Reasoning Approach. Requirements analysis usually results in a set of different specifications for the same system, which must be integrated. Integration involves the detection and elimination of discrepancies between them. Discrepancies may be due to differences in representation models, modeling perspectives or practices. As instances of the semantic heterogeneity problem (D. Gangopadhyay and T. Barsalou 1991), discrepancies are broader than logical inconsistencies, and therefore not always detectable using...
Managing requirements inconsistency with development goal monitors Managing the development of software requirements can be a complex and difficult task. The environment is often chaotic. As analysts and customers leave the project, they are replaced by others who drive development in new directions. As a result, inconsistencies arise. Newer requirements introduce inconsistencies with older requirements. The introduction of such requirements inconsistencies may violate stated goals of development. In this article, techniques are presented that manage requirements document inconsistency by managing inconsistencies that arise between requirement development goals and requirements development enactment. A specialized development model, called a requirements dialog meta-model, is presented. This meta-model defines a conceptual framework for dialog goal definition, monitoring, and in the case of goal failure, dialog goal reestablishment. The requirements dialog meta-model is supported in an automated multiuser World Wide Web environment, called DealScribe. An exploratory case study of its use is reported. This research supports the conclusions that: 1) an automated tool that supports the dialog meta-model can automate the monitoring and reestablishment of formal development goals, 2) development goal monitoring can be used to determine statements of a development dialog that fail to satisfy development goals, and 3) development goal monitoring can be used to manage inconsistencies in a developing requirements document. The application of DealScribe demonstrates that a dialog meta-model can enable a powerful environment for managing development and document inconsistencies.
Software Requirements Analysis for Real-Time Process-Control Systems A set of criteria is defined to help find errors in software requirements specifications. Only analysis criteria that examine the behavioral description of the computer are considered. The behavior of the software is described in terms of observable phenomena external to the software. Particular attention is focused on the properties of robustness and lack of ambiguity. The criteria are defined using an abstract state-machine model for generality. Using these criteria, analysis procedures can be defined for particular state-machine modeling languages to provide semantic analysis of real-time process-control software requirements.
A Study of 12 Specifications of the Library Problem The author studies twelve specifications for a seemingly simple database problem and demonstrates many approaches for classifying informally stated problem requirements. She compares the specifications according to how they address problems of the library example to illustrate the imprecision of natural-language specifications and how twelve different approaches to the same set of informal requirements reveal many of the same problems. The comparison suggests which issues should be addressed in refining an informal set of requirements and shows how these issues are resolved in different specification approaches.
A distributed algorithm to implement n-party rendezvous The concept of n-party rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The problem of implementing n-party rendezvous captures two central issues in the design of distributed systems: exclusion and synchronization. This paper describes a simple, distributed algorithm, referred to as the event manager algorithm, to implement n-party rendezvous. It also compares the performance of this algorithm with an existing algorithm for this problem.
Planarity for Clustered Graphs In this paper, we introduce a new graph model known as clustered graphs, i.e. graphs with recursive clustering structures. This graph model has many applications in informational and mathematical sciences. In particular, we study C-planarity of clustered graphs. Given a clustered graph, the C-planarity testing problem is to determine whether the clustered graph can be drawn without edge crossings, or edge-region crossings. In this paper, we present efficient algorithms for testing C-planarity and finding C-planar embeddings of clustered graphs.
The role of task analysis in capturing requirements for interface design Recently, the role of task analysis in design has been brought into question. It has been argued, for example, that task analysis leads to the non-creative redesign of existing artefacts. In this paper, we offer a view of task analysis that resolves this problem. In particular, we argue that by focusing upon the analysis of user/operator goals rather than an existing task implementation, task anal...
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.205225
0.02332
0.013744
0.006527
0.0024
0.001532
0.000893
0.000505
0.000255
0.000056
0.000001
0
0
0
Distributed Cognition in the Management of Design Requirements. In this position statement, we outline a new theoretical framework of the distribution of design requirements processes. Building upon the Theory of Distributed Cognition, we characterize contemporary requirements efforts as distributed cognitive systems in which elements of a design vision are distributed socially, structurally, and temporally. We discuss the various forms of distribution observed in real-world systems development projects and the processes by which representational states are propagated through the system. We conclude with a brief discussion of the implications of the framework for requirements research and practice.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, as well as on power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Clustering Granular Data and Their Characterization With Information Granules of Higher Type
Fuzzy clustering analysis for optimizing fuzzy membership functions Fuzzy model identification is an application of fuzzy inference systems for identifying unknown functions from a given set of sampled data. The most important task in fuzzy identification is deciding the parameters of the membership functions (MFs) used in the fuzzy system. Considerable effort (Chung and Lee, 1994; Jang, 1993; Sun and Jang, 1993) has been devoted to initializing the parameters of fuzzy membership functions; however, the parameter identification problem has not been solved formally. Assessments of these algorithms are discussed in the paper. Based on the fuzzy c-means (FCM) clustering algorithm (Bezdek, 1987), we propose a heuristic method to calibrate the fuzzy exponent iteratively. A hybrid learning algorithm for refining the system parameters is then presented. Examples are demonstrated to show the effectiveness of the proposed method, compared with the equalized universe method (EUM) and the subtractive clustering method (SCM) (Chiu, 1994). The simulation results indicate the general applicability of our methods to a wide range of applications.
Collaborative clustering with the use of Fuzzy C-Means and its quantification In this study, we introduce the concept of collaborative fuzzy clustering-a conceptual and algorithmic machinery for the collective discovery of a common structure (relationships) within a finite family of data residing at individual data sites. There are two fundamental features of the proposed optimization environment. First, given existing constraints which prevent individual sites from exchanging detailed numeric data, any communication has to be realized at the level of information granules. The specificity of these granules impacts the effectiveness of ensuing collaborative activities. Second, the fuzzy clustering realized at the level of the individual data site has to constructively consider the findings communicated by other sites and act upon them while running the optimization confined to the particular data site. Adhering to these two general guidelines, we develop a comprehensive optimization scheme and discuss its two-phase character in which the communication phase of the granular findings intertwines with the local optimization being realized at the level of the individual site and exploits the evidence collected from other sites. The proposed augmented form of the objective function is essential in the navigation of the overall optimization that has to be completed on a basis of the data and available information granules. The intensity of collaboration is optimized by choosing a suitable tradeoff between the two components of the objective function. The objective function based clustering used here concerns the well-known Fuzzy C-Means (FCM) algorithm. Experimental studies presented include some synthetic data, selected data sets coming from the machine learning repository and the weather data coming from Environment Canada.
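Both clustering abstracts above build on the standard Fuzzy C-Means iteration, which alternates prototype and membership updates. The sketch below shows that plain FCM kernel only, not the exponent-calibration or collaborative extensions; the fixed iteration count and random initialisation are simplifying assumptions.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means: alternate prototype and membership updates."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                       # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]            # cluster prototypes
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))                         # inverse-distance weights
        U = w / w.sum(axis=1, keepdims=True)                # updated memberships
    return U, V
```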
Design of information granule-oriented RBF neural networks and its application to power supply for high-field magnet To realize effective modeling and secure accurate prediction abilities of models for power supply for high-field magnet (PSHFM), we develop a comprehensive design methodology of information granule-oriented radial basis function (RBF) neural networks. The proposed network comes with a collection of radial basis functions, which are structurally as well as parametrically optimized with the aid of information granulation and genetic algorithm. The structure of the information granule-oriented RBF neural networks invokes two types of clustering methods such as K-Means and fuzzy C-Means (FCM). The taxonomy of the resulting information granules relates to the format of the activation functions of the receptive fields used in RBF neural networks. The optimization of the network deals with a number of essential parameters as well as the underlying learning mechanisms (e.g., the width of the Gaussian function, the numbers of nodes in the hidden layer, and a fuzzification coefficient used in the FCM method). During the identification process, we are guided by a weighted objective function (performance index) in which a weight factor is introduced to achieve a sound balance between approximation and generalization capabilities of the resulting model. The proposed model is applied to modeling power supply for high-field magnet where the model is developed in the presence of a limited dataset (where the small size of the data is implied by high costs of acquiring data) as well as strong nonlinear characteristics of the underlying phenomenon. The obtained experimental results show that the proposed network exhibits high accuracy and generalization capabilities.
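A stripped-down version of the modelling pipeline described above, under the assumption that receptive fields come from K-Means centres with a shared Gaussian width and that the output weights are fitted by regularised least squares; the information-granulation and genetic optimisation of the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def fit_rbf(X, y, n_centers=10, sigma=1.0, seed=0):
    """RBF network sketch: K-Means receptive fields plus a linear output layer."""
    km = KMeans(n_clusters=n_centers, random_state=seed, n_init=10).fit(X)
    centers = km.cluster_centers_
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(d ** 2) / (2 * sigma ** 2))      # Gaussian hidden-layer activations
    out = Ridge(alpha=1e-3).fit(H, y)             # least-squares output weights
    return centers, sigma, out

def predict_rbf(model, X):
    centers, sigma, out = model
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return out.predict(np.exp(-(d ** 2) / (2 * sigma ** 2)))
```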
From fuzzy data analysis and fuzzy regression to granular fuzzy data analysis This note offers some personal views on the two pioneers of fuzzy sets, late Professors Hideo Tanaka and Kiyoji Asai. The intent is to share some personal memories about these remarkable researchers and humans, highlight their long-lasting research accomplishments and stress a visible impact on the fuzzy set community.The note elaborates on new and promising research avenues initiated by fuzzy regression and identifies future developments of these models emerging within the realm of Granular Computing and giving rise to a plethora of granular fuzzy models and higher-order and higher-type granular constructs.
Designing Fuzzy Sets With the Use of the Parametric Principle of Justifiable Granularity The study is concerned with a design of membership functions of fuzzy sets. The membership functions are formed in such a way so that they are experimentally justifiable and exhibit a sound semantics. These two requirements are articulated through the principle of justifiable granularity. The parametric version of the principle is discussed in detail. We show linkages with type-2 fuzzy sets, which are constructed on a basis of type-1 fuzzy sets. Several experimental studies are reported, which illustrate a behavior of the introduced method.
Building the fundamentals of granular computing: A principle of justifiable granularity The study introduces and discusses a principle of justifiable granularity, which supports a coherent way of designing information granules in presence of experimental evidence (either of numerical or granular character). The term ''justifiable'' pertains to the construction of the information granule, which is formed in such a way that it is (a) highly legitimate (justified) in light of the experimental evidence, and (b) specific enough meaning it comes with a well-articulated semantics (meaning). The design process associates with a well-defined optimization problem with the two requirements of experimental justification and specificity. A series of experiments is provided as well as a number of constructs carried for various formalisms of information granules (intervals, fuzzy sets, rough sets, and shadowed sets) are discussed as well.
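The coverage-versus-specificity optimisation at the heart of the principle of justifiable granularity can be sketched for the interval case as below. The exponential specificity term and the product-form objective are one common choice, used here as an assumption; the papers above discuss alternative functional forms and other granule formalisms.

```python
import numpy as np

def justifiable_interval(data, alpha=1.0):
    """Pick interval bounds that balance coverage of the data against specificity."""
    data = np.sort(np.asarray(data, dtype=float))
    med = np.median(data)

    def best_bound(candidates, side):
        scores = []
        for b in candidates:
            if side == "upper":
                coverage = np.sum((data >= med) & (data <= b))   # points covered above the median
            else:
                coverage = np.sum((data <= med) & (data >= b))   # points covered below the median
            specificity = np.exp(-alpha * abs(b - med))          # narrower is more specific
            scores.append(coverage * specificity)
        return candidates[int(np.argmax(scores))]

    upper = best_bound(data[data >= med], "upper")
    lower = best_bound(data[data <= med], "lower")
    return lower, upper
```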
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
Scikit-learn: Machine Learning in Python Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
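The uniform fit/score estimator interface mentioned in the scikit-learn abstract looks like this in practice; the particular estimator, dataset, and split below are arbitrary illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Every scikit-learn estimator follows the same fit/predict/score pattern.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```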
Specifying dynamic support for collaborative work within WORLDS In this paper, we present a specification language developed for WORLDS, a next generation computer-supported collaborative work system. Our specification language, called Introspect, employs a meta-level architecture to allow run-time modifications to specifications. We believe such an architecture is essential to WORLDS' ability to provide dynamic support for collaborative work in an elegant fashion.
Refinement of State-Based Concurrent Systems The traces, failures, and divergences of CSP can be expressed as weakest precondition formulæ over action systems. We show how such systems may be refined up to failures-divergences, by giving two proof methods which are sound and jointly complete: forwards and backwards simulations. The technical advantage of our weakest precondition approach over the usual relational approach is in our simple handling of divergence; the practical advantage is in the fact that the refinement calculus for sequential programs may be used to calculate forwards simulations. Our methods may be adapted to state-based development methods such as VDM or Z.
Reasoning with Background Knowledge - A Three-Level Theory
Abstraction of objects by conceptual clustering Closely tied to first-order predicate logic, the conceptual graph formalism constitutes a knowledge representation language. The abstraction of systems presents several advantages: it helps render complex systems more understandable, thus facilitating their analysis and design. Our approach to conceptual graph abstraction, or conceptual clustering, is based on rectangular decomposition. It produces a set of clusters representing similarities between subsets of the objects to be abstracted, organized into a hierarchy of classes: the Knowledge Space. Some conceptual clustering methods already exist; our approach is distinguished from them insofar as it allows a gain in space and time.
MoMut::UML Model-Based Mutation Testing for UML
1.205878
0.205878
0.205878
0.205878
0.205878
0.104408
0.026112
0
0
0
0
0
0
0
Decay-rate-dependent conditions for exponential stability of stochastic neutral systems with Markovian jumping parameters. This note studies the problem of decay-rate-dependent exponential stability for neutral stochastic delay systems with Markovian jumping parameters. First, by introducing an operator D(xt,i) as well as a novel Lyapunov-Krasovskii functional, sufficient conditions for exponential stability of system with a decay rate are obtained. Second, the results are extended to the robust exponential estimates for uncertain neutral stochastic delay systems with Markovian jumping parameters. Finally, numerical examples are provided to show the effectiveness of the proposed results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Designs with Angelic Nondeterminism Hoare and He's Unifying Theories of Programming (UTP) are a predicative relational framework for the definition and combination of refinement languages for a variety of programming paradigms. Previous work has defined a theory for angelic nondeterminism in the UTP; this is basically an encoding of binary multirelations in a predicative model. In the UTP a theory of designs (pre and postcondition pairs) provides, not only a model of terminating programs, but also a stepping stone to define a theory for state-rich reactive processes. In this paper, we cast the angelic nondeterminism theory of the UTP as a theory of designs with the long-term objective of providing a model for well established refinement process algebras like Communicating Sequential Processes (CSP) and Circus.
Algebras for correctness of sequential computations. Previous work gives algebras for uniformly describing correctness statements and calculi in various relational and matrix-based computation models. These models support a single kind of non-determinism, which is either angelic, demonic or erratic with respect to the infinite executions of a computation. Other models, notably isotone predicate transformers or up-closed multirelations, offer both angelic and demonic choice with respect to finite executions. We propose algebras for a theory of correctness which covers these multirelational models in addition to relational and matrix-based models. Existing algebraic descriptions, in particular general refinement algebras and monotonic Boolean transformers, are instances of our theory. Our new description includes a precondition operation that instantiates to both modal diamond and modal box operators. We verify all results in Isabelle, heavily using its automated theorem provers. We integrate our theories with the Isabelle theory of monotonic Boolean transformers making our results applicable to that setting.
Angelicism in the Theory of Reactive Processes.
A tutorial introduction to CSP in unifying theories of programming In their Unifying Theories of Programming (UTP), Hoare & He use the alphabetised relational calculus to give denotational semantics to a wide variety of constructs taken from different programming paradigms. In this chapter, we give a tutorial introduction to the semantics of CSP processes, as presented in Chapter 3. We start with a summarised introduction of the alphabetised relational calculus and the theory of designs, which are pre-post specifications in the style of specification statements. Afterwards, we present in detail a theory for reactive processes. Later, we combine the theories of designs and reactive processes to provide the model for CSP processes. Finally, we compare this new model with the standard failures-divergences model for CSP. In the next section, we give an overview of the UTP, and in Section 2 we present its most general theory: the alphabetised predicates. In the following section, we establish that this theory is a complete lattice. Section 4 restricts the general theory to designs. Section 5 presents the theory of reactive processes; Section 6 contains our treatment of CSP processes; and Section 7 relates our model to Roscoe's standard model. We summarise the work in Section 8.
Dual unbounded nondeterminacy, recursion, and fixpoints In languages with unbounded demonic and angelic nondeterminacy, functions acquire a surprisingly rich set of fixpoints. We show how to construct these fixpoints, and describe which ones are suitable for giving a meaning to recursively defined functions. We present algebraic laws for reasoning about them at the language level, and construct a model to show that the laws are sound. The model employs a new kind of power domain-like construct for accommodating arbitrary nondeterminacy.
A theoretical basis for stepwise refinement and the programming calculus A uniform treatment of specifications, programs, and programming is presented. The treatment is based on adding a specification statement to a given procedural language and defining its semantics. The extended language is thus a specification language and programs are viewed as a subclass of specifications. A partial ordering on specifications/programs corresponding to ‘more defined’ is defined. In this partial ordering the program/specification hybrids that arise in the construction of a program by stepwise refinement form a monotonic sequence. We show how Dijkstra's calculus for the derivation of programs corresponds to constructing this monotonic sequence. Formalizing the calculus thus gives some insight into the intellectual activity it demands and allows us to hint at further developments.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Facile: a symmetric integration of concurrent and functional programming Facile is a symmetric integration of concurrent and functional programming. The language supports both function and process abstraction. Functions may be defined and used within processes, and processes can be dynamically created during expression evaluation. In this work we present two different descriptions of the operational semantics of Facile. First, we develop a structural operational semantics for a small core subset of Facile using a labeled transition system. Such a semantics is useful for reasoning about the operational behavior of Facile programs. We then provide an abstract model of implementation for Facile: the Concurrent and Functional Abstract Machine (C-FAM). The C-FAM executes concurrent processes evaluating functional expressions. The implementation semantics includes compilation rules from Facile to C-FAM instructions and execution rules for the abstract machine. This level of semantic description is suitable for those interested in implementations.
Generating test cases for real-time systems from logic specifications We address the problem of automated derivation of functional test cases for real-time systems, by introducing techniques for generating test cases from formal specifications written in TRIO, a language that extends classical temporal logic to deal explicitly with time measures. We describe an interactive tool that has been built to implement these techniques, based on interpretation algorithms of the TRIO language. Several heuristic criteria are suggested to reduce drastically the size of the test cases that are generated. Experience in the use of the tool on real-life cases is reported.
Resolving Goal Conflicts via Negotiation In non-cooperative multi-agent planning, resolution of multiple conflicting goals is the result of finding compromise solutions. Previous research has dealt with such multi-agent problems where planning goals are well-specified, subgoals can be enumerated, and the utilities associated with subgoals known. Our research extends the domain of problems to include non-cooperative multi-agent interactions where planning goals are ill-specified, subgoals cannot be enumerated, and the associated utilities are not precisely known. We provide a model of goal conflict resolution through negotiation implemented in the PERSUADER, a program that resolves labor disputes. Negotiation is performed through proposal and modification of goal relaxations. Case-Based Reasoning is integrated with the use of multi-attribute utilities to portray tradeoffs and propose novel goal relaxations and compromises. Persuasive arguments are generated and used as a mechanism to dynamically change the agents' utilities so that convergence to an acceptable compromise can be achieved.
Developing Object-based Distributed Systems
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.122
0.122
0.12
0.042667
0.004
0.001229
0
0
0
0
0
0
0
0
The world's a stage: a survey on requirements engineering using a real-life case study In this article we present a survey on the area of Requirements Engineering anchored on the analysis of a real-life case study, the London Ambulance Service (56). We aim at bringing into context new methods, techniques and tools that should be of help to both researchers and practitioners. The case study in question is of special interest in that it is available to the public and deals with a very large system, of which the software system is only a part. The survey is divided into four topics of interest: viewpoints, social aspects, evolution and non-functional requirements. This division resulted from the work method adopted by the authors. Our main goal is to bridge recent findings in Requirements Engineering research to a real world problem. In this light, we believe this article to be an important educational device.
Scenario inspections Scenarios help practitioners to better understand the requirements of a software system as well as its interface with the environment. However, despite their widespread use both by object-oriented development teams and human–computer interface designers, scenarios are being built in a very ad-hoc way. Departing from the requirements engineering viewpoint, this article shows how inspections help software developers to better manage the production of scenarios. We used Fagan’s inspections as the main paradigm in the design of our proposed process. The process was applied to case studies and data were collected regarding the types of problems as well as the effort to find them.
A Scenario Construction Process use cases should evolve from concrete use cases, not the other way round. Extends associations let us capture the functional requirements of a complex system, in the same way we learn about any new subject: First we understand the basic functions, then we introduce complexity." Gough et al. [28] follow an approach closer to the one proposed in this article regarding their heuristics: 1. Creation of natural language documents: project scope documents, customer needs documents, service needs...
Dealing with Change: An Approach Using Non-functional Requirements. Non-functional requirements (or Quality Requirements, NFRs) such as confidentiality, performance and timeliness are often crucial to a software system. Concerns for such NFRs are often the impetus for change. To systematically support system evolution, this paper adapts the "NFR-Framework" which treats NFRs as goals to be achieved during development. Throughout the process, consideration of design alternatives, analysis of tradeoffs and rationalisation of design decisions are all carried out in ...
On Non-Functional Requirements in Software Engineering Essentially a software system's utility is determined by both its functionality and its non-functional characteristics, such as usability, flexibility, performance, interoperability and security. Nonetheless, there has been a lop-sided emphasis in the functionality of the software, even though the functionality is not useful or usable without the necessary non-functional characteristics. In this chapter, we review the state of the art on the treatment of non-functional requirements (hereafter, NFRs), while providing some prospects for future directions.
A Conceptual Framework for Requirements Engineering. A framework for assessing research and practice in requirements engineering is proposed. The framework is used to survey state of the art research contributions and practice. The framework considers a task activity view of requirements, and elaborates different views of requirements engineering (RE) depending on the starting point of a system development. Another perspective is to analyse RE from different conceptions of products and their properties. RE research is examined within this framework and then placed in the context of how it extends current system development methods and systems analysis techniques.
Managing Multiple Requirements Perspectives with Metamodels Stakeholder conflicts can be productive in requirements engineering. A requirements-engineering project should ensure that crucial requirements are captured from at least two perspectives, preferably in a notation of the customer's choosing. Capturing, monitoring, and resolving multiple perspectives is difficult and time-consuming when done by hand. Our experience with ConceptBase, a meta-data-management system, shows that a simple but customizable metamodeling approach, combined with an advanced query facility, produces higher quality requirements documents in less time. Our experience shows that conceptual metamodeling technology can be a valuable complement to informal teamwork methods of business analysis and requirements engineering. In particular, the use of representations and cross-perspective analysis can help identify a wide variety of conflicts and, perhaps more important, monitor them.
Towards Modeling and Reasoning Support for Early-Phase Requirements Engineering Requirements are usually understood as stating what a system is supposed to do, as opposed to how it should do it. However, understanding the organizational context and rationales (the "Whys'') that lead up to systems requirements can be just as important for the ongoing success of the system. Requirements modeling techniques can be used to help deal with the knowledge and reasoning needed in this earlier phase of requirements engineering. However, most existing requirements techniques are intended more for the later phase of requirements engineering, which focuses on completeness, consistency, and automated verification of requirements. In contrast, the early phase aims to model and analyze stakeholder interests and how they might be addressed, or compromised, by various system-and-environment alternatives. This paper argues, therefore, that a different kind of modeling and reasoning support is needed for the early phase. An outline of the i* framework is given as an example of a step in this direction. Meeting scheduling is used as a domain example.
On artificial agents for negotiation in electronic commerce A well-established body of research consistently shows that people involved in multiple-issue negotiations frequently select pareto-inferior agreements that “leave money on the table”. Using an evolutionary computation approach, we show how simple, boundedly rational, artificial adaptive agents can learn to perform similarly to humans at stylized negotiations. Furthermore, there is the promise that these agents can be integrated into practicable electronic commerce systems which would not only leave less money on the table, but would enable new types of transactions to be negotiated cost effectively
Automatic verification of finite-state concurrent systems using temporal logic specifications We give an efficient procedure for verifying that a finite-state concurrent system meets a specification expressed in a (propositional, branching-time) temporal logic. Our algorithm has complexity linear in both the size of the specification and the size of the global state graph for the concurrent system. We also show how this approach can be adapted to handle fairness. We argue that our technique can provide a practical alternative to manual proof construction or use of a mechanical theorem prover for verifying many finite-state concurrent systems. Experimental results show that state machines with several hundred states can be checked in a matter of seconds.
The Gaia Methodology for Agent-Oriented Analysis and Design This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societal) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system).
Universal Sparse Modeling
Task Structures As A Basis For Modeling Knowledge-Based Systems Recently, there has been an increasing interest in improving the reliability and quality of AI systems. As a result, a number of approaches to knowledge-based systems modeling have been proposed. However, these approaches are limited in formally verifying the intended functionality and behavior of a knowledge-based system. In this article, we proposed a formal treatment to task structures to formally specify and verify knowledge-based systems modeled using these structures. The specification of a knowledge-based system modeled using task structures has two components: a model specification that describes static properties of the system, and a process specification that characterizes dynamic properties of the system. The static properties of a system are described by two models: a model about domain objects (domain model), and a model about the problem-solving states (state model). The dynamic properties of the system are characterized by (1) using the notion of state transition to explicitly describe what the functionality of a task is, and (2) specifying the sequence of tasks and interactions between tasks (i.e., behavior of a system) using task state expressions (TSE). The task structure extended with the proposed formalism not only provides a basis for detailed functional decomposition with procedure abstraction embedded in, but also facilitates the verification of the intended functionality and behavior of a knowledge-based system. (C) 1997 John Wiley & Sons, Inc.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.044815
0.066667
0.022271
0.011135
0.004541
0.000741
0.000089
0.000053
0.00002
0.000001
0
0
0
0
Model-based, mutation-driven test case generation via heuristic-guided branching search. This work introduces a heuristic-guided branching search algorithm for model-based, mutation-driven test case generation. The algorithm is designed towards the efficient and computationally tractable exploration of discrete, non-deterministic models with huge state spaces. Asynchronous parallel processing is a key feature of the algorithm. The algorithm is inspired by the successful path planning algorithm Rapidly exploring Random Trees (RRT). We adapt RRT in several aspects towards test case generation. Most notably, we introduce parametrized heuristics for start and successor state selection, as well as a mechanism to construct test cases from the data produced during search. We implemented our algorithm in the existing test case generation framework MoMuT. We present an extensive evaluation of our heuristics and parameters based on a diverse set of demanding models obtained in an industrial context. In total we continuously utilized 128 CPU cores on three servers for two weeks to gather the experimental data presented. Using statistical methods we determine which heuristics are performing well on all models. With our new algorithm, we are now able to process models consisting of over 2300 concurrent objects. To our knowledge there is no other mutation driven test case generation tool that is able to process models of this magnitude.
MoMut::UML Model-Based Mutation Testing for UML
Model-based mutation testing via symbolic refinement checking. In model-based mutation testing, a test model is mutated for test case generation. The resulting test cases are able to detect whether the faults in the mutated models have been implemented in the system under test. For this purpose, a conformance check between the original and the mutated model is required. The generated counterexamples serve as basis for the test cases. Unfortunately, conformance checking is a hard problem and requires sophisticated verification techniques. Previous attempts using an explicit conformance checker suffered state space explosion. In this paper, we present several optimisations of a symbolic conformance checker using constraint solving techniques. The tool efficiently checks the refinement between non-deterministic test models. Compared to previous implementations, we could reduce our runtimes by 97%. In a new industrial case study, our optimisations can reduce the runtime from over 6 hours to less than 3 minutes.
Killing strategies for model-based mutation testing. This article presents the techniques and results of a novel model-based test case generation approach that automatically derives test cases from UML state machines. The main contribution of this article is the fully automated fault-based test case generation technique together with two empirical case studies derived from industrial use cases. Also, an in-depth evaluation of different fault-based test case generation strategies on each of the case studies is given and a comparison with plain random testing is conducted. The test case generation methodology supports a wide range of UML constructs and is grounded on the formal semantics of Back's action systems and the well-known input-output conformance relation. Mutation operators are employed on the level of the specification to insert faults and generate test cases that will reveal the faults inserted. The effectiveness of this approach is shown and it is discussed how to gain a more expressive test suite by combining cheap but undirected random test case generation with the more expensive but directed mutation-based technique. Finally, an extensive and critical discussion of the lessons learnt is given as well as a future outlook on the general usefulness and practicability of mutation-based test case generation. Copyright © 2014 John Wiley & Sons, Ltd.
Decentralization of process nets with centralized control The behavior of a net of interconnected, communicating processes is described in terms of the joint actions in which the processes can participate. A distinction is made between centralized and decentralized action systems. In the former, a central agent with complete information about the state of the system controls the execution of the actions; in the latter no such agent is needed. Properties of joint action systems are expressed in temporal logic. Centralized action systems allow for simple description of system behavior. Decentralized (two-process) action systems again can be mechanically compiled into a collection of CSP processes. A method for transforming centralized action systems into decentralized ones is described. The correctness of this method is proved, and its use is illustrated by deriving a process net that distributedly sorts successive lists of integers.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications.
Knowledge-based and statistical approaches to text retrieval Major research issues in information retrieval are reviewed, and developments in knowledge-based approaches are described. It is argued that although a fair amount of work has been done, the effectiveness of this approach has yet to be demonstrated. It is suggested that statistical techniques and knowledge-based approaches should be viewed as complementary, rather than competitive.
The multiway rendezvous The multiway rendezvous is a natural generalization of the rendezvous in which more than two processes may participate. The utility of the multiway rendezvous is illustrated by solutions to a variety of problems. To make their simplicity apparent, these solutions are written using a construct tailor-made to support the multiway rendezvous. The degree of support for multiway rendezvous applications by several well-known languages that support the two-way rendezvous is examined. Since such support for the multiway rendezvous is found to be inadequate, well-integrated extensions to these languages are considered that would help provide such support.
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.2
0.066667
0.04
0.04
0.000488
0
0
0
0
0
0
0
0
0
Constructing and Refining Large-Scale Railway Models Represented by Petri Nets A new method for rapid construction of large-scale executable railway models is presented. Computer systems for railway systems suffer from poor integration and lack of explicit understanding of the large amount of static and dynamic information in the railway. In this paper, we give solutions to both problems. It is shown how a component-oriented approach makes it easy to construct and refine basic railway models by effective methods, such that a variety of models with important properties can be maintained within the same framework. Basic railway nets are refined into several new kinds: nets that are safe, permit collision detection, include time, and are sensitive to its surroundings. Since the underlying implementation language is Petri nets, large expressibility is combined with simplicity, and in addition, the analysis of the behavior of railway models comes gently.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Demonic, angelic and unbounded probabilistic choices in sequential programs Probabilistic predicate transformers extend standard predicate transformers by adding probabilistic choice to (transformers for) sequential programs; demonic nondeterminism is retained. For finite state spaces, the basic theory is set out elsewhere [17], together with a presentation of the probabilistic 'healthiness conditions' that generalise the 'positive conjunctivity' of ordinary predicate transformers. Here we expand the earlier results beyond ordinary conjunctive transformers, investigating the structure of the transformer space more generally: as Back and von Wright [1] did for the standard (non-probabilistic) case, we nest deterministic, demonic and demonic/angelic transformers, showing how each subspace can be constructed from the one before. We show also that the results hold for infinite state spaces. In the end we thus find characteristic healthiness conditions for the hierarchies of a system in which deterministic, demonic, probabilistic and angelic choices all coexist.
A Refinement Theory that Supports Reasoning About Knowledge and Time An expressive semantic framework for program refinement that supports both temporal reasoning and reasoning about the knowledge of multiple agents is developed. The refinement calculus owes the cleanliness of its decomposition rules for all programming language constructs and the relative simplicity of its semantic model to a rigid synchrony assumption which requires all agents and the environment to proceed in lockstep. The new features of the calculus are illustrated in a derivation of the two-phase-commit protocol.
Automating refinement checking in probabilistic system design Refinement plays a crucial role in "top-down" styles of verification, such as the refinement calculus, but for probabilistic systems proof of refinement is a particularly challenging task due to the combination of probability and nondeterminism which typically arises in partially-specified systems. Whilst the theory of probabilistic refinement is well-known [18] there are few tools to help with establishing refinements between programs. In this paper we describe a tool which provides partial support during refinement proofs. The tool essentially builds small models of programs using an algebraic rewriting system to extract the overall probabilistic behaviour. We use that behaviour to recast refinement-checking as a linear satisfiability problem, which can then be exported to a linear arithmetic solver. One of the major benefits of this approach is the ability to generate counter examples, alerting the prover to a problem in a proposed refinement. We demonstrate the technique on a small case study based on Schneider et al.'s Tank Monitoring [26].
The Generalised Substitution Language Extended to Probabilistic Programs Let predicate P be converted from Boolean to numeric type by writing ⟨P⟩, with ⟨false⟩ being 0 and ⟨true⟩ being 1, so that in a degenerate sense ⟨P⟩ can be regarded as 'the probability that P holds in the current state'. Then add explicit numbers and arithmetic operators, to give a richer language of arithmetic formulae into which predicates are embedded by ⟨·⟩. Abrial's generalised substitution language GSL can be applied to arithmetic rather than Boolean formulae with little extra effort. If we add a new operator p⊕ for probabilistic choice, it then becomes 'pGSL': a smooth extension of GSL that includes random algorithms within its scope.
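For readers unfamiliar with the notation, the following is a hedged illustration in standard pGCL/pGSL style rather than a quotation from the paper; wp is the expectation transformer and R an arbitrary post-expectation (both names are my own choice here). The embedding ⟨P⟩ is 1 where P holds and 0 where it does not, and probabilistic choice simply averages the expectations of its two branches:

```latex
% Hedged sketch in standard pGCL/pGSL notation (not quoted from the paper).
\langle P \rangle(s) =
  \begin{cases}
    1 & \text{if } P \text{ holds in state } s \\
    0 & \text{otherwise}
  \end{cases}
\qquad
wp\,(A \mathbin{{}_{p}\oplus} B).R \;=\; p \cdot wp(A).R \;+\; (1-p)\cdot wp(B).R
```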
The shadow knows: refinement of ignorance in sequential programs Separating sequential-program state into “visible” and “hidden” parts facilitates reasoning about knowledge, security and privacy: applications include zero-knowledge protocols, and security contexts with hidden “high-security” state and visible “low-security” state. A rigorous definition of how specifications relate to implementations, as part of that reasoning, must ensure that implementations reveal no more than their specifications: they must, in effect, preserve ignorance. We propose just such a definition –a relation of ignorance-preserving refinement– between specifications and implementations of sequential programs. Its purpose is to enable a development-by-refinement methodology for applications like those above. Since preserving ignorance is an extra obligation, the proposed refinement relation restricts (rather than extends) the usual. We suggest general principles for restriction, and we give specific examples of them. To argue that we do not restrict too much –for “no refinements allowed at all” is trivially ignorance-preserving– we derive The Dining Cryptographers protocol via a program algebra based on the restricted refinement relation. It is also a motivating case study, as it has never before (we believe) been treated refinement-algebraically. In passing, we discuss –and solve– the Refinement Paradox.
Partial correctness for probabilistic demonic programs Recent work in sequential program semantics has produced both an operational (He et al., Sci. Comput. Programming 28(2, 3) (1997) 171-192) and an axiomatic (Morgan et al., ACM Trans. Programming Languages Systems 18(3) (1996) 325-353; Seidel et al., Tech Report PRG-TR-6-96, Programming Research Group, February 1996) treatment of total correctness for probabilistic demonic programs, extending Kozen's original work (J. Comput. System Sci. 22 (1981) 328-350; Kozen, Proc. 15th ACM Symp. on Theory of Computing, ACM, New York, 1983) by adding demonic nondeterminism. For practical applications (e.g. combining loop invariants with termination constraints) it is important to retain the traditional distinction between partial and total correctness. Jones (Monograph ECS-LFCS-90-105, Ph.D. Thesis, Edinburgh University, Edinburgh, UK, 1990) defines probabilistic partial correctness for probabilistic, but again not demonic programs. In this paper we combine all the above, giving an operational and axiomatic framework for both partial and total correctness of probabilistic and demonic sequential programs; among other things, that provides the theory to support our earlier---and practical---publication on probabilistic demonic loops (Morgan, in: Jifeng et al. (Eds.), Proc. BCS-FACS Seventh Refinement Workshop, Workshops in Computing, Springer, Berlin, 1996).
A Single Complete Rule for Data Refinement One module is said to be refined by a second if no program using the second module can detect that it is not using the first; in that case the second module can replace the first in any program. Data refinement transforms the interior pieces of a module — its state and consequentially its operations — in order to refine the module overall.
How to cook a temporal proof system for your pet language An abstract temporal proof system is presented whose program-dependent part has a high-level interface with the programming language actually studied. Given a new language, it is sufficient to define the interface notions of atomic transitions, justice, and fairness in order to obtain a full temporal proof system for this language. This construction is particularly useful for the analysis of concurrent systems. We illustrate the construction on the shared-variable model and on CSP. The generic proof system is shown to be relatively complete with respect to pure first-order temporal logic.
Mathematics of Program Construction, MPC'95, Kloster Irsee, Germany, July 17-21, 1995, Proceedings
Software requirements: Are they really a problem? Do requirements arise naturally from an obvious need, or do they come about only through diligent effort—and even then contain problems? Data on two very different types of software requirements were analyzed to determine what kinds of problems occur and whether these problems are important. The results are dramatic: software requirements are important, and their problems are surprisingly similar across projects. New software engineering techniques are clearly needed to improve both the development and statement of requirements.
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Intelligent Clearinghouse: Electronic Marketplace with Computer-mediated Negotiation Supports In this paper, we propose an intelligent clearinghouse system, an electronic marketplace with computer-mediated negotiation supports. Most existing electronic market systems support relatively stable markets: traders are not allowed to revise their bids and offers during the market transaction. The intelligent clearinghouse addresses dynamic markets where buyers and sellers are willing to change their utilities as market conditions evolve. Traders in dynamic markets may suffer a significant loss if they fail to execute transactions promptly. The clearinghouse enables traders to compromise their original utilities to avoid transaction failures. This paper describes the foundation of the clearinghouse system and discusses its trading mechanism, including its order matching method and negotiation support capabilities.
Lossless compression of AVIRIS images. Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral (224-band) images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The methods have two stages-predictive decorrelation (which produces residuals) and residual encoding. Good predictors are described, whose performance closely approaches limits imposed by sensor noise. It is imperative that these predictors make use of the high spectral correlations between bands. The residuals are encoded using variable-length coding (VLC) methods, and compression is improved by using eight codebooks whose design depends on the sensor's noise characteristics. Rice (1979) coding has also been evaluated; it loses 0.02-0.05 b/pixel compression compared with better VLC methods but is much simpler and faster. Results for compressing ten AVIRIS images are reported.
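To make the two-stage structure (predictive decorrelation followed by residual encoding) concrete, here is a minimal sketch of inter-band DPCM; the predictor coefficients, the neighbourhood, and the function name are assumptions for illustration and are not the paper's actual predictors. The residual array produced here is what would be handed to a variable-length coder.

```python
import numpy as np

def interband_dpcm_residuals(cube, a=0.75, b=0.25):
    """Compute DPCM residuals for a (bands, rows, cols) image cube.

    Prediction: a * co-located pixel in the previous band
              + b * left neighbour in the current band.
    The first band and first column fall back to simpler predictors.
    """
    bands, rows, cols = cube.shape
    cube = cube.astype(np.int64)
    residuals = np.zeros_like(cube)
    for z in range(bands):
        for y in range(rows):
            for x in range(cols):
                if z == 0:
                    pred = cube[z, y, x - 1] if x > 0 else 0
                else:
                    left = cube[z, y, x - 1] if x > 0 else cube[z - 1, y, x]
                    pred = int(round(a * cube[z - 1, y, x] + b * left))
                residuals[z, y, x] = cube[z, y, x] - pred
    return residuals  # these would feed the variable-length coder

# Tiny synthetic example: 3 bands of spectrally correlated data.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(8, 8))
cube = np.stack([base + k + rng.integers(-2, 3, size=(8, 8)) for k in range(3)])
print("mean |residual|:", np.abs(interband_dpcm_residuals(cube)).mean())
```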
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.078498
0.1
0.1
0.04276
0.033889
0.001717
0.000241
0.000013
0
0
0
0
0
0
Operational specification with joint actions: serializable databases Joint actions are introduced as a language basis for operational specification of reactive systems. Joint action systems are closed systems with no communication primitives. Their nondeterministic execution model is based on multi-party actions without an explicit control flow, and they are amenable for stepwise derivation by superposition. The approach is demonstrated by deriving a specification for serializable databases in simple derivation steps. Two different implementation strategies are imposed on this as further derivations. One of the strategies is two-phase locking, for which a separate implementation is given and proved correct. The other is multiversion timestamp ordering, for which the derivation itself is an implementation.
Modeling of Distributed Real-Time Systems in DisCo In this paper we describe adding of metric real time to joint actions, and to the DisCo specification language and tool that are based on them. No new concepts or constructs are needed: time is represented by variables in objects, and action durations are given by action parameters. Thus, we can reason about real-time properties in the same way as about other properties. The scheduling model is unrestricted in the sense that every logically possible computation gets some scheduling. This is more general than maximal parallelism, and the properties proved under it are less sensitive to small changes in timing. Since real time is handled by existing DisCo constructs, the tool with its execution capabilities can be used to simulate and animate also real-time properties of specifications.
Stepwise design of real-time systems The joint action approach to modeling of reactive systems is presented and augmented with real time. This leads to a stepwise design method where temporal logic of actions can be used for formal reasoning, superposition is the key mechanism for transformations, the advantages of closed-system modularity are utilized, logical properties are addressed before real-time properties, and real-time properties are enforced without any specific assumptions on scheduling. As a result, real-time modeling is made possible already at early stages of specification, and increased insensitivity is achieved with respect to properties imposed by implementation environments.
Hazard Analysis in Formal Specification Action systems have proven their worth in the design of safety-critical systems. The approach is based on a firm mathematical foundation within which the reasoning about the correctness and behaviour of the system under development is carried out. Hazard analysis is a vital part of the development of safety-critical systems. The results of the hazard analysis are semantically different from the specification terms of the controlling software. The purpose of this paper is to show how we can incorporate the results of hazard analysis into an action system specification by encoding this information via available composition operators for action systems in order to specify robust and safe controllers.
Towards programming with knowledge expressions Explicit use of knowledge expressions in the design of distributed algorithms is explored. A non-trivial case study is carried through, illustrating the facilities that a design language could have for setting and deleting the knowledge that the processes possess about the global state and about the knowledge of other processes. No implicit capabilities for logical reasoning are assumed. A language basis is used that allows common knowledge not only by an eager protocol but also in the true sense. The observation is made that the distinction between these two kinds of common knowledge can be associated with the level of abstraction: true common knowledge of higher levels can be implemented as eager common knowledge on lower levels. A knowledge-motivated abstraction tool is therefore suggested to be useful in supporting stepwise refinement of distributed algorithms.
Hybrid Models with Fairness and Distributed Clocks Explicit clocks provide a well-known possibility to introduce time into non-real-time theories of reactive systems. This technique is applied here to an approach where distributed systems are modeled with temporal logic of actions as the formal basis, and fairness as the basic force that makes events take place. The focus of the paper is on the formalization and practical proof of hybrid properties of the form "at every moment of time t, φ(t) holds for X", where X is a set of objects with distributed clocks, and φ is a predicate that depends both on the discrete states of x ∈ X and on time t. The approach is illustrated by a treatment of two well-known examples from the hybrid system literature.
Fairness and hyperfairness in multi-party interactions In this paper, a new fairness notion is proposed for languages with multi-party interactions as the sole interprocess synchronization and communication primitive. The main advantage of this fairness notion is the elimination of starvation occurring solely due to race conditions (i.e., ordering of independent actions). Also, this is the first fairness notion for such languages which is fully-adequate with respect to the criteria presented in [AFK88]. The paper defines the notion, proves its properties, and presents examples of its usefulness.
A new and efficient implementation of multiprocess synchronization
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer possibilities to make their own decisions to guide the integration process and to achieve qualitative design goals.
The Object Flow Model: A Formal Framework for Describing the Dynamic Construction, Destruction and Interaction of Complex Objects This research complements active object-oriented database management systems by providing a formal, yet conceptually-natural model for complex object construction and destruction. The Object Flow Model (OFM), introduced in this paper, assumes an object-oriented database for the rich structural description of objects and for the specification of methods to manipulate objects. The OFM contributes a third component, the Object Flow Diagram (OFD), which provides a visual formalism to describe how multiple objects and events can actively invoke processing steps, how objects can become part of progressively more complex objects, and how complex objects can be picked apart. The OFD thus provides an invocation mechanism that is more general than a single message and a processing mechanism that may invoke multiple methods (so long as they apply to either the input or output objects). The development of the OFD was influenced by conceptual modeling languages and discrete event simulation languages and the formal semantics of the OFD is based on work in deductive databases.
An Initial-Algebra Approach to Directed Acyclic Graphs. The initial-algebra approach to modelling datatypes consists of giving constructors for building larger objects of that type from smaller ones, and laws identifying different ways of constructing the same object. The recursive decomposition of objects of the datatype leads directly to a recursive pattern of computation on those objects, which is very helpful for both functional and parallel programming. We show how to model a particular kind of directed acyclic graph using this...
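The constructor-plus-fold idea is easiest to see on a simpler datatype than the directed acyclic graphs treated in the paper; the sketch below uses binary trees purely as an assumed stand-in, to show how the recursive decomposition of a datatype yields a recursive pattern of computation.

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

A = TypeVar("A")

# Constructors of the datatype: Leaf and Node build larger trees from smaller ones.
@dataclass(frozen=True)
class Leaf:
    value: int

@dataclass(frozen=True)
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

def fold(leaf: Callable[[int], A], node: Callable[[A, A], A], t: Tree) -> A:
    """Replace each constructor by the corresponding function (the initial-algebra view)."""
    if isinstance(t, Leaf):
        return leaf(t.value)
    return node(fold(leaf, node, t.left), fold(leaf, node, t.right))

t = Node(Node(Leaf(1), Leaf(2)), Leaf(3))
print(fold(lambda v: v, lambda l, r: l + r, t))   # sum of leaves -> 6
print(fold(lambda v: 1, lambda l, r: l + r, t))   # number of leaves -> 3
```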
Model Checking Complete Requirements Specifications Using Abstraction Although model checking has proven remarkably effective in detecting errors in hardware designs, its success in the analysis of software specifications has been limited. Model checking algorithms for hardware verification commonly use Binary Decision Diagrams (BDDs) to represent predicates involving the many Boolean variables commonly found in hardware descriptions. Unfortunately, BDD representations may be less effective for analyzing software specifications, which usually contain not only Booleans but variables spanning a wide range of data types. Further, software specifications typically have huge, sometimes infinite, state spaces that cannot be model checked directly using conventional symbolic methods. One promising but largely unexplored approach to model checking software specifications is to apply mathematically sound abstraction methods. Such methods extract a reduced model from the specification, thus making model checking feasible. Currently, users of model checkers routinely analyze reduced models but often generate the models in ad hoc ways. As a result, the reduced models may be incorrect. This paper, an expanded version of (Bharadwaj and Heitmeyer, 1997), describes how one can model check a complete requirements specification expressed in the SCR (Software Cost Reduction) tabular notation. Unlike previous approaches which applied model checking to mode transition tables with Boolean variables, we use model checking to analyze properties of a complete SCR specification with variables ranging over many data types. The paper also describes two sound and, under certain conditions, complete methods for producing abstractions from requirements specifications. These abstractions are derived from the specification and the property to be analyzed. Finally, the paper describes how SCR requirements specifications can be translated into the languages of Spin, an explicit state model checker, and SMV, a symbolic model checker, and presents the results of model checking two sample SCR specifications using our abstraction methods and the two model checkers.
JAN - Java animation for program understanding JAN is a system for animated execution of Java programs. Its application area is program understanding rather than debugging. To this end, the animation can be customized, both by annotating the code with visualization directives and by interactively adapting the visual appearance to the user's personal taste. Object diagrams and sequence diagrams are supported. Scalability is achieved by recognizing object composition: object aggregates are displayed in a nested fashion and mechanisms for collapsing and exploding aggregates are provided. JAN has been applied to itself, producing an animation of its visualization back-end.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.039152
0.038908
0.036144
0.033333
0.019988
0.008555
0.004139
0.000288
0.000024
0.000001
0
0
0
0
Development of intelligent systems and multi-agents systems with amine platform Amine is a Java open source multi-layer platform dedicated to the development of intelligent systems and multi-agents systems. This paper and companion papers [2, 3] provide an overview of Amine platform and illustrate its use in the development of dynamic programming applications, natural language processing applications, multi-agents systems and ontology-based applications.
Report from the Joint W3C/IETF URI Planning Interest Group: Uniform Resource Identifiers (URIs), URLs, and Uniform Resource Names (URNs): Clarifications and Recommendations
Modeling Real Reasoning In this article we set out to develop a mathematical model of real-life human reasoning. The most successful attempt to do this, classical formal logic, achieved its success by restricting attention to formal reasoning within pure mathematics; more precisely, the process of proving theorems in axiomatic systems. Within the framework of mathematical logic, a logical proof consists of a finite sequence σ_1, σ_2, ..., σ_n of statements, such that for each i = 1, ..., n, σ_i is either an assumption for the argument (possibly an axiom), or else follows from one or more of σ_1, ..., σ_{i-1} by a rule of logic.
A Pragmatic Understanding of "Knowing That" and "Knowing How": The Pivotal Role of Conceptual Structures What is the difference between knowing that a cake is baked and knowing how to bake a cake? In each, the core concepts are the same, cake and baking, yet there seems to be a significant difference. The classical distinction between knowing that and knowing how points to the pivotal role of conceptual structures in both reasoning about and using knowledge. Peirce's recognition of this pivotal role is most clearly seen in the pragmatic maxim that links theoretical and practical maxims. By extending Peirce's pragmatism with the notion of a general argument pattern, the relation between conceptual structures and these ways of knowing can be understood in terms of the filling instructions for concepts. Since a robust account of conceptual structures must be able to handle both the context of knowing that and knowing how, it would seem reasonable to think that there will be multiple representations for the filling instructions. This in turn suggests that a methodological principle of tolerance between those approaches that stress the theoretical understanding of concepts appropriate to knowing that and those that stress the proceduralist understanding of concepts appropriate to knowing how is desirable.
Specifying multiple-viewed software requirements with conceptual graphs Among all the phases of software development, requirements are particularly difficult to specify and analyze, since requirements for any large software system originate with many different persons. Each person's view of the software requirements may be expressed in a different notation, based on that person's knowledge, experience, and vocabulary. In order to perform a knowledge-based analysis of the requirements in combination, a single knowledge representation must be capable of capturing the information expressible in several existing requirements notations. This paper introduces the notation of conceptual graphs, based on semantic networks, which provides a general representation. Four common requirements notations are shown to be expressible using conceptual graphs, with algorithms and examples provided.
Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures he observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model "reliable computing platform" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
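As a small illustration of the basic behavioural model such surveys review (my own example, not code from the survey), the enabling and firing rules of an ordinary place/transition net fit in a few lines:

```python
# Ordinary place/transition net: a transition is enabled when every input
# place holds enough tokens; firing consumes input tokens and produces
# output tokens. The names and the example net are assumptions for illustration.

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    new = dict(marking)
    for p, n in pre.items():
        new[p] -= n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

# Example: producer/consumer-style net with a buffer place.
marking = {"ready": 1, "buffer": 0}
produce = ({"ready": 1}, {"buffer": 1, "ready": 1})   # (pre, post)
consume = ({"buffer": 1}, {})

marking = fire(marking, *produce)
print(marking)                       # {'ready': 1, 'buffer': 1}
print(enabled(marking, consume[0]))  # True
```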
Performance evaluation in content-based image retrieval: overview and proposals Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented.
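For instance, two standard IR measures that carry over directly to CBIR evaluation, precision and recall, can be computed from a ranked result list and a set of relevance judgments; the identifiers and the example data below are invented for illustration.

```python
def precision_recall_at_k(ranked_ids, relevant_ids, k):
    """Precision@k and recall@k for a single query."""
    retrieved = set(ranked_ids[:k])
    hits = len(retrieved & relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall

ranked = ["img7", "img3", "img9", "img1", "img4"]   # system output (assumed)
relevant = {"img3", "img1", "img8"}                 # ground truth (assumed)
print(precision_recall_at_k(ranked, relevant, k=5))  # (0.4, 0.666...)
```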
WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications.
Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity-Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation.
S/NET: A High-Speed Interconnect for Multiple Computers This paper describes S/NET (symmetric network), a high-speed small area interconnect that supports effective multiprocessing using message-based communication. This interconnect provides low latency, bounded contention time, and high throughput. It further provides hardware support for low level flow control and signaling. The interconnect is a star network with an active switch. The computers connect to the switch through full duplex fiber links. The S/NET provides a simple memory addressable interface to the processors and appears as a logical bus interconnect. The switch provides fast, fair, and deterministic contention resolution. It further supports high priority signals to be sent unimpeded in presence of data traffic (this can be viewed as equivalent to interrupts on a conventional memory bus). The initial implementation supports a mix of VAX computers and Motorola 68000 based single board computers up to a maximum of 12. The switch throughput is 80 Mbits/s and the fiber links operate at a data rate of 10 Mbits/s. The kernel-to-kernel latency is only 100 μs. We present a description of the architecture and discuss the performance of current systems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.11
0.12
0.12
0.12
0.005714
0
0
0
0
0
0
0
0
0
Security of medical images for telemedicine: a systematic review Recently, there has been a rapid growth in the utilization of medical images in telemedicine applications. The authors in this paper presented a detailed discussion of different types of medical images and the attacks that may affect medical image transmission. This survey paper summarizes existing medical data security approaches and the different challenges associated with them. An in-depth overview of security techniques, such as cryptography, steganography, and watermarking are introduced with a full survey of recent research. The objective of the paper is to summarize and assess the different algorithms of each approach based on different parameters such as PSNR, MSE, BER, and NC.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
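As a rough sketch of the general idea only, the fragment below runs flip-move tabu search on a 0/1 vector with an infeasibility penalty; it omits the paper's extreme-point machinery, aspiration criteria, target analysis and learning, and all parameter values and names are assumptions.

```python
def tabu_knapsack(values, weights, capacities, iters=200, tenure=3):
    """Bare-bones tabu search for a multiconstraint 0/1 knapsack problem."""
    n = len(values)

    def score(x):
        total = sum(v for v, xi in zip(values, x) if xi)
        # Penalise constraint violations so infeasible solutions score poorly.
        penalty = sum(max(0, sum(w[i] for i, xi in enumerate(x) if xi) - c)
                      for w, c in zip(weights, capacities))
        return total - 10 * penalty

    x = [0] * n
    best, best_score = x[:], score(x)
    tabu = {}  # item index -> iteration at which flipping it is allowed again
    for it in range(iters):
        candidates = [i for i in range(n) if tabu.get(i, -1) <= it]
        if not candidates:
            continue
        # Take the best non-tabu single-bit flip (no aspiration criterion here).
        i = max(candidates, key=lambda j: score(x[:j] + [1 - x[j]] + x[j + 1:]))
        x[i] = 1 - x[i]
        tabu[i] = it + tenure
        s = score(x)
        if s > best_score:
            best, best_score = x[:], s
    return best, best_score

values = [10, 7, 4, 9, 6]
weights = [[3, 4, 2, 5, 3],   # one row of coefficients per constraint
           [2, 1, 4, 3, 2]]
capacities = [9, 7]
print(tabu_knapsack(values, weights, capacities))
```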
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A framework for the development of object-oriented distributed systems A framework for the development of distributed object-oriented systems in heterogeneous environments is presented. This model attempts to incorporate concepts of distributed computing technology together with software engineering methods. A tool has been developed to enforce the model by means of a high-level specification language and an automatic code generator. However, it is emphasized that the model may be exploited as a methodologic foundation even outside an automatic generation approach
Rapid prototyping through communicating Petri nets The design and implementation of a tool for the construction of distributed systems are described. This tool is based on a specification model which extends ordinary Petri nets to include functional and structural concepts. Functional extensions give the model specification completeness, whereas structuring extensions support the organization of the system under development into a set of message passing modules. The augmented model is named communicating Petri net (CmPN). After an introduction to communicating Petri nets, an outline of the software lifecycle activities enforced by the tool under development is given. Two different methods for automatic code generation are expounded and compared in terms of both computational run-time overhead and code dimension (in the case of an example comprised of four CmPNs)
Using communicating Petri nets to implement parallel computation in single-tasking operating systems An extended Petri net model, referred to as communicating Petri nets, is used for the operational specification of distributed systems, within a software engineering tool. This specification language naturally involves a parallel computational model, largely fitting the characteristics of parallel systems. Nevertheless, it can be efficiently matched even on single-tasking systems by means of appropriate light-weight scheduling policies. A number of such scheduling policies is devised and assessed both with respect to their impact on the semantics of the run-time code and with respect to the execution overhead
A Case Study of SREM
A software design method for real-time systems DARTS—a design method for real-time systems—leads to a highly structured modular system with well-defined interfaces and reduced coupling between tasks.
An integrated method for effective behaviour analysis of distributed systems
Structured analysis for requirements definition The next article, by Ross and Schoman, is one of three papers chosen for inclusion in this book that deal with the subject of structured analysis. With its companion papers --- by Teichroew and Hershey [Paper 23] and by DeMarco [Paper 24] --- the paper gives a good idea of the direction that the software field probably will be following for the next several years. The paper addresses the problems of traditional systems analysis, and anybody who has spent any time as a systems analyst in a large EDP organization immediately will understand the problems and weaknesses of "requirements definition" that Ross and Schoman relate --- clearly not the sort of problems upon which academicians like Dijkstra, Wirth, Knuth, and most other authors in this book have focused! To stress the importance of proper requirements definition, Ross and Schoman state that "even the best structured programming code will not help if the programmer has been told to solve the wrong problem, or, worse yet, has been given a correct description, but has not understood it." In their paper, the authors summarize the problems associated with conventional systems analysis, and describe the steps that a "good" analysis approach should include. They advise that the analyst separate his logical, or functional description of the system from the physical form that it eventually will take; this is difficult for many analysts to do, since they assume, a priori, that the physical implementation of the system will consist of a computer. Ross and Schoman also emphasize the need to achieve a consensus among typically disparate parties: the user liaison personnel who interface with the developers, the "professional" systems analyst, and management. Since all of these people have different interests and different viewpoints, it becomes all the more important that they have a common frame of reference --- a common way of modeling the system-to-be. For this need, Ross and Schoman propose their solution: a proprietary package, known as SADT, that was developed by the consulting firm of SofTech for which the authors work. The SADT approach utilizes a top-down, partitioned, graphic model of a system. The model is presented in a logical, or abstract, fashion that allows for eventual implementation as a manual system, a computer system, or a mixture of both. This emphasis on graphic models of a system is distinctly different from that of the Teichroew and Hershey paper. It is distinctly similar to the approach suggested by DeMarco in "Structured Analysis and System Specification," the final paper in this collection. The primary difference between DeMarco and Ross/Schoman is that DeMarco and his colleagues at YOURDON inc. prefer circles, or "bubbles," whereas the SofTech group prefers rectangles. Ross and Schoman point out that their graphic modeling approach can be tied in with an "automated documentation" approach of the sort described by Teichroew and Hershey. Indeed, this approach gradually is beginning to be adopted by large EDP organizations; but for installations that can't afford the overhead of a computerized, automated systems analysis package, Ross and Schoman neglect one important aspect of systems modeling. That is the "data dictionary," in which all of the data elements pertinent to the new system are defined in the same logical top-down fashion as the rest of the model. There also is a need to formalize mini-specifications, or "mini-specs" as DeMarco calls them; that is, the "business policy" associated with each bottom-level functional process of the system must be described in a manner far more rigorous than currently is being done. A weakness of the Ross/Schoman paper is its lack of detail about problem solutions: More than half the paper is devoted to a description of the problems of conventional analysis, but the SADT package is described in rather sketchy detail. There are additional documents on SADT available from SofTech, but the reader still will be left with the fervent desire that Messrs. Ross and Schoman and their colleagues at SofTech eventually will sit down and put their ideas into a full-scale book.
Reasoning about nonatomic operations A method is presented that permits assertional reasoning about a concurrent program even though the atomicity of the elementary operations is left unspecified. It is based upon a generalization of the dynamic logic operator [α]. The method is illustrated by verifying the mutual exclusion property for a two-process version of the bakery algorithm.
An exploratory contingency model of user participation and MIS use A model is proposed of the relationship between user participation and degree of MIS usage. The model has four dimensions: participation characteristics, system characteristics, system initiator, and the system development environment. Stages of the System Development Life Cycle are considered as a participation characteristic, task complexity as a system characteristic, and top management support and user attitudes as parts of the system development environment. The data are from a cross-sectional survey in Korea, covering 134 users of 77 different information systems in 32 business firms. The results of the analysis support the proposed model in general. Several implications of this for MIS managers are then discussed.
Applications experience with Linda We describe three experiments using C-Linda to write parallel codes. The first involves assessing the similarity of DNA sequences. The results demonstrate Linda's flexibility—Linda solutions are presented that work well at two quite different levels of granularity. The second uses a prime finder to illustrate a class of algorithms that do not (easily) submit to automatic parallelizers, but can be parallelized in straight-forward fashion using C-Linda. The final experiment describes the process lattice model, an "inherently" parallel application that is naturally conceived as multiple interacting processes. Taken together, the experience described here bolsters our claim that Linda can bridge the gap between the growing collection of parallel hardware and users eager to exploit parallelism.
Stepwise Refinement of Distributed Systems, Models, Formalisms, Correctness, REX Workshop, Mook, The Netherlands, May 29 - June 2, 1989, Proceedings
A Calculus for Predicative Programming A calculus for developing programs from specifications written as predicates that describe the relationship between the initial and final state is proposed. Such specifications are well known from the specification language Z. All elements of a simple sequential programming notation are defined in terms of predicates. Hence programs form a subset of specifications. In particular, sequential composition is defined by demonic composition, non-deterministic choice by demonic disjunction, and iteration by fixed points. Laws are derived which allow proving equivalence and refinement of specifications and programs by a series of steps. The weakest precondition calculus is also included. The approach is compared to the predicative programming approach of E. Hehner and to other refinement calculi.
Lossless compression of AVIRIS images. Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral (224-band) images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The methods have two stages-predictive decorrelation (which produces residuals) and residual encoding. Good predictors are described, whose performance closely approaches limits imposed by sensor noise. It is imperative that these predictors make use of the high spectral correlations between bands. The residuals are encoded using variable-length coding (VLC) methods, and compression is improved by using eight codebooks whose design depends on the sensor's noise characteristics. Rice (1979) coding has also been evaluated; it loses 0.02-0.05 b/pixel compression compared with better VLC methods but is much simpler and faster. Results for compressing ten AVIRIS images are reported.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.057208
0.022471
0.020414
0.010484
0.003813
0.002286
0.000157
0.000019
0.000002
0.000001
0
0
0
0
Task-Based Specifications Through Conceptual Graphs Combining conceptual graphs with the task-based specification method to specify software requirements helps capture richer semantics, and integrates requirements specifications tightly and uniformly. Conceptual modeling is an important step toward the construction of user requirements. Requirements engineering is knowledge-intensive and cannot be dealt with using only a few general principles. Therefore, a conceptual model is domain-oriented and should represent the richer semantics of the problem domain. The conceptual model also helps designers communicate among themselves and with users. To capture and represent a conceptual model for the problem domain, we need mechanisms to structure the knowledge of the problem domain at the conceptual level, which has the underlying principles of abstraction and encapsulation; and formalisms to represent the semantics of the problem domain and to provide a reasoning capability for verification and validation. We propose the task-based specification methodology as the mechanism to structure the knowledge captured in conceptual models. TBSM offers four main benefits for constructing conceptual models: First, incorporating the task structure provides a detailed functional-decomposition technique for organizing and refining functional and behavioral specifications. Second, the distinction between soft and rigid conditions lets us specify conflicting functional requirements. Third, with TBSM, not only can we document the expected control flow and module interactions, but we can also verify that the behavioral specification is consistent with the system's functional specification. Fourth, the state model makes it easier to describe complex state conditions. Terminology defined in the state model can easily be reused for specifying the functionality of different tasks. Without such a state model, describing the state conditions before and after a functional unit of an expert system is too cumbersome to be practical. We propose conceptual graphs as the formalism to express task-based specifications where the task structure of problem-solving knowledge drives the specification, the pieces of the specification can be iteratively refined, and verification can be performed for a single layer or between layers. We chose conceptual graphs for their expressive power to represent both declarative and procedural knowledge, and for their assimilation capability--that is, their ability to be combined.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The role of knowledge in software development Software development is knowledge-intensive. Many concepts have been developed to ease or guide the processing of knowledge in software development, including information hiding, modularity, objects, functions and procedures, patterns, and more. These concepts are supported by various methods, approaches, and tools using symbols, graphics, and languages. Some are formal; others are semiformal or simply made up of key practices. Methods and approaches in software engineering are often based on the results of empirical observations or on individual success stories.
Integrating multiple paradigms within the blackboard framework While early knowledge-based systems suffered the frequent criticism of having little relevance to the real world, an increasing number of current applications deal with complex, real-world problems. Due to the complexity of real-world situations, no one general software technique can produce adequate results in different problem domains, and artificial intelligence usually needs to be integrated with conventional paradigms for efficient solutions. The complexity and diversity of real-world applications have also forced the researchers in the AI field to focus more on the integration of diverse knowledge representation and reasoning techniques for solving challenging, real-world problems. Our development environment, BEST (Blackboard-based Expert Systems Toolkit), is aimed to provide the ability to produce large-scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his application, using any or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective even for a specific single application than any technique in isolation or collections of multiple techniques less fully integrated. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge. Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem-solving paradigms.
Indexing hypertext documents in context
Intelligent Query Answering by Knowledge Discovery Techniques Knowledge discovery facilitates querying database knowledge and intelligent query answering in database systems. In this paper, we investigate the application of discovered knowledge, concept hierarchies, and knowledge discovery tools for intelligent query answering in database systems. A knowledge-rich data model is constructed to incorporate discovered knowledge and knowledge discovery tools. Queries are classified into data queries and knowledge queries. Both types of queries can be answered directly by simple retrieval or intelligently by analyzing the intent of query and providing generalized, neighborhood or associated information using stored or discovered knowledge. Techniques have been developed for intelligent query answering using discovered knowledge and/or knowledge discovery tools, which includes generalization, data summarization, concept clustering, rule discovery, query rewriting, deduction, lazy evaluation, application of multiple-layered databases, etc. Our study shows that knowledge discovery substantially broadens the spectrum of intelligent query answering and may have deep implications on query answering in data- and knowledge-base systems.
An incremental constraint solver An incremental constraint solver, the DeltaBlue algorithm maintains an evolving solution to the constraint hierarchy as constraints are added and removed. DeltaBlue minimizes the cost of finding a new solution after each change by exploiting its knowledge of the last solution.
Understanding quality in conceptual modeling With the increasing focus on early development as a major factor in determining overall quality, many researchers are trying to define what makes a good conceptual model. However, existing frameworks often do little more than list desirable properties. The authors examine attempts to define quality as it relates to conceptual models and propose their own framework, which includes a systematic approach to identifying quality-improvement goals and the means to achieve them. The framework has two unique features: it distinguishes between goals and means by separating what you are trying to achieve in conceptual modeling from how to achieve it (the notion of feasibility is introduced to make the goals more realistic); and it is closely linked to linguistic concepts because modeling is essentially making statements in some language.
Facilitating experience reuse among software project managers Organizations have lost billions of dollars due to poor software project implementations. In an effort to enable software project managers to repeat prior successes and avoid previous mistakes, this research seeks to improve the reuse of a specific type of knowledge among software project managers, experiences in the form of narratives. To meet this goal, we identify a set of design principles for facilitating experience reuse based on the knowledge management literature. Guided by these principles we develop a model called Experience Exchange for facilitating the reuse of experiences in the form of narratives. We also provide a proof-of-concept instantiation of a critical component of the Experience Exchange model, the Experience Exchange Library. We evaluate the Experience Exchange model theoretically and empirically. We conduct a theoretical evaluation by ensuring that our model complies with the design principles identified from the literature. We also perform an experiment, using the developed instantiation of the Experience Exchange Library, to evaluate if technology can serve as a medium for transferring experiences across software projects.
Inferring Declarative Requirements Specifications from Operational Scenarios Scenarios are increasingly recognized as an effective means for eliciting, validating, and documenting software requirements. This paper concentrates on the use of scenarios for requirements elicitation and explores the process of inferring formal specifications of goals and requirements from scenario descriptions. Scenarios are considered here as typical examples of system usage; they are provided in terms of sequences of interaction steps between the intended software and its environment. Such scenarios are in general partial, procedural, and leave required properties about the intended system implicit. In the end such properties need to be stated in explicit, declarative terms for consistency/completeness analysis to be carried out. A formal method is proposed for supporting the process of inferring specifications of system goals and requirements inductively from interaction scenarios provided by stakeholders. The method is based on a learning algorithm that takes scenarios as examples/counterexamples and generates a set of goal specifications in temporal logic that covers all positive scenarios while excluding all negative ones. The output language in which goals and requirements are specified is the KAOS goal-based specification language. The paper also discusses how the scenario-based inference of goal specifications is integrated in the KAOS methodology for goal-based requirements engineering. In particular, the benefits of inferring declarative specifications of goals from operational scenarios are demonstrated by examples of formal analysis at the goal level, including conflict analysis, obstacle analysis, the inference of higher-level goals, and the derivation of alternative scenarios that better achieve the underlying goals.
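The preceding abstract describes learning goal specifications inductively from positive and negative scenarios. The toy sketch below illustrates only the basic inductive idea, not the paper's algorithm or the KAOS language: it searches a small hypothesis space of "event a is always immediately followed by event b" properties that hold on every positive scenario and jointly rule out the negative ones. All scenario data and the property shape are invented for illustration.

```python
# Toy illustration of the inductive idea only (not the paper's learning
# algorithm or the KAOS language): search simple properties of the form
# "event a is always immediately followed by event b" that hold on every
# positive scenario and jointly rule out the negative ones.
from itertools import product

def holds(trace, a, b):
    """True if every occurrence of a is immediately followed by b (strong next)."""
    ok_inner = all(trace[i + 1] == b for i in range(len(trace) - 1) if trace[i] == a)
    ok_last = not trace or trace[-1] != a      # a at the very end has no successor
    return ok_inner and ok_last

def infer(positives, negatives):
    events = {e for t in positives + negatives for e in t}
    candidates = [(a, b) for a, b in product(events, repeat=2) if a != b]
    kept = [c for c in candidates if all(holds(t, *c) for t in positives)]
    not_excluded = [t for t in negatives if all(holds(t, *c) for c in kept)]
    return kept, not_excluded

positives = [["request", "ack", "done"], ["idle", "request", "ack"]]
negatives = [["request", "done"]]              # invented scenarios
goals, uncovered = infer(positives, negatives)
print("inferred goals (a -> next b):", goals)
print("negative scenarios not excluded:", uncovered)
```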
Tools for specifying real-time systems Tools for formally specifying software for real-time systems have strongly improved their capabilities in recent years. At present, tools have the potential for improving software quality as well as engineers' productivity. Many tools have grown out of languages and methodologies proposed in the early 1970s. In this paper, the evolution and the state of the art of tools for real-time software specification is reported, by analyzing their development over the last 20 years. Specification techniques are classified as operational, descriptive or dual if they have both operational and descriptive capabilities. For each technique reviewed three different aspects are analyzed, that is, power of formalism, tool completeness, and low-level characteristics. The analysis is carried out in a comparative manner; a synthetic comparison is presented in the final discussion where the trend of technology improvement is also analyzed.
JPEG 2000 performance evaluation and assessment JPEG 2000, the new ISO/ITU-T standard for still image coding, has recently reached the International Standard (IS) status. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper provides a comparison of JPEG 2000 with JPEG-LS and MPEG-4 VTC, in addition to older but widely used solutions, such as JPEG and PNG, and well established algorithms, such as SPIHT. Lossless compression efficiency, fixed and progressive lossy rate-distortion performance, as well as complexity and robustness to transmission errors, are evaluated. Region of Interest coding is also discussed and its behavior evaluated. Finally, the set of provided functionalities of each standard is also evaluated. In addition, the principles behind each algorithm are briefly described. The results show that the choice of the “best” standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.
Enhancing Human Face Detection by Resampling Examples Through Manifolds As a large-scale database of hundreds of thousands of face images collected from the Internet and digital cameras becomes available, how to utilize it to train a well-performed face detector is a quite challenging problem. In this paper, we propose a method to resample a representative training set from a collected large-scale database to train a robust human face detector. First, in a high-dimensional space, we estimate geodesic distances between pairs of face samples/examples inside the collected face set by isometric feature mapping (Isomap) and then subsample the face set. After that, we embed the face set to a low-dimensional manifold space and obtain the low-dimensional embedding. Subsequently, in the embedding, we interweave the face set based on the weights computed by locally linear embedding (LLE). Furthermore, we resample nonfaces by Isomap and LLE likewise. Using the resulting face and nonface samples, we train an AdaBoost-based face detector and run it on a large database to collect false alarms. We then use the false detections to train a one-class support vector machine (SVM). Combining the AdaBoost and one-class SVM-based face detector, we obtain a stronger detector. The experimental results on the MIT + CMU frontal face test set demonstrated that the proposed method significantly outperforms the other state-of-the-art methods.
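The abstract above outlines a pipeline of manifold-based resampling (Isomap/LLE), an AdaBoost detector, and a one-class SVM trained on false alarms. The sketch below only approximates that flow with scikit-learn components and random arrays standing in for image patches; the subsampling rule, all parameters, and the cascade logic are simplifications, not the authors' method.

```python
# Hedged sketch of the overall flow (not the authors' exact resampling scheme):
# embed a large face set with Isomap, subsample it in the embedding, train an
# AdaBoost detector, then fit a one-class SVM on its false alarms.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
faces = rng.normal(0.5, 0.1, size=(600, 64))      # stand-ins for face patches
nonfaces = rng.uniform(0.0, 1.0, size=(600, 64))  # stand-ins for background patches

# 1) Embed faces on a low-dimensional manifold and keep a spread-out subsample.
emb = Isomap(n_neighbors=10, n_components=3).fit_transform(faces)
order = np.argsort(emb[:, 0])                      # crude proxy for spread along the manifold
face_sub = faces[order[::3]]                       # keep every 3rd sample along that axis

# 2) Train the AdaBoost-based detector on the resampled set.
X = np.vstack([face_sub, nonfaces[:len(face_sub)]])
y = np.hstack([np.ones(len(face_sub)), np.zeros(len(face_sub))])
ada = AdaBoostClassifier(n_estimators=50).fit(X, y)

# 3) Collect false alarms on a large non-face pool and fit a one-class SVM on them.
pool = rng.uniform(0.0, 1.0, size=(2000, 64))
false_alarms = pool[ada.predict(pool) == 1]
svm = OneClassSVM(nu=0.1).fit(false_alarms) if len(false_alarms) else None

# 4) Cascade: accept only windows passed by AdaBoost and not resembling a known false alarm.
def detect(x):
    if ada.predict(x.reshape(1, -1))[0] != 1:
        return False
    return svm is None or svm.predict(x.reshape(1, -1))[0] == -1

print(detect(faces[0]), detect(pool[0]))
```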
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0 to score_13: 1.01492, 0.015921, 0.013281, 0.013281, 0.012276, 0.011839, 0.006119, 0.002533, 0.000026, 0.000004, 0, 0, 0, 0
Toward More Understandable User Interface Specifications Many different methods have been used to specify user interfaces: algebraic specification, grammars, task description languages, transition diagrams with and without extensions, rule-based systems, and by demonstration. However, none of these methods has been widely adopted. Current user interfaces are still built by writing a program, perhaps with the aid of a UIMS. There are two principal reasons for this. First, specification languages are difficult to use. Reading a specification and understanding its exact meaning is very difficult. Writing a correct specification is even more difficult. Second, most specification languages are not executable. This means that after the user interface programmer makes the effort to write a specification, the user interface must still be coded. As a consequence, most programmers have little incentive to do a specification. A pilot study into the comprehensibility of specifications is described. The results of this study suggest that user interface specifications are difficult to interpret manually. A possible solution to this problem, specification animation, is also described.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
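As a rough illustration of the enabled-set idea described above (and not the Rosette/actor syntax), the following sketch lets an object name, per state, the set of messages it will currently accept; a bounded buffer enables put only when it has room and get only when it is non-empty.

```python
# Minimal sketch of the enabled-set idea (illustrative, hypothetical class):
# each state of the object names the set of messages it will currently accept.
class BoundedBuffer:
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity

    def enabled_set(self):
        """Messages acceptable in the current state."""
        enabled = set()
        if len(self.items) < self.capacity:
            enabled.add("put")
        if self.items:
            enabled.add("get")
        return enabled

    def send(self, message, *args):
        if message not in self.enabled_set():
            raise RuntimeError(f"{message} not enabled in current state")
        return getattr(self, message)(*args)

    def put(self, x):
        self.items.append(x)

    def get(self):
        return self.items.pop(0)

buf = BoundedBuffer(capacity=1)
buf.send("put", 42)
print(buf.send("get"))       # 42
# buf.send("get")            # would raise: "get" is not enabled when empty
```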
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
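A bare-bones version of tabu search on a 0/1 multiconstraint knapsack may help make the abstract above concrete. The sketch below uses simple bit-flip moves, a short-term tabu tenure, a penalty for constraint infeasibility, and the usual aspiration rule; the paper's extreme-point moves, target analysis, and learning strategies are not modeled, and the instance data are invented.

```python
# Bare-bones tabu search for a 0/1 multiconstraint knapsack (illustrative only;
# the paper's extreme-point moves, learning and target analysis are omitted).
import random

values = [10, 13, 7, 8, 15, 9]
weights = [[3, 4, 2, 3, 5, 2],     # one row per knapsack constraint
           [2, 3, 4, 1, 4, 3]]
capacities = [9, 8]

def evaluate(x):
    value = sum(v for v, xi in zip(values, x) if xi)
    infeas = sum(max(0, sum(w[i] for i in range(len(x)) if x[i]) - c)
                 for w, c in zip(weights, capacities))
    return value - 100 * infeas           # penalise constraint violation

def tabu_search(iters=200, tenure=5, seed=1):
    random.seed(seed)
    x = [0] * len(values)
    best, best_score = x[:], evaluate(x)
    tabu = {}                             # item index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(len(x)):
            y = x[:]; y[i] ^= 1
            score = evaluate(y)
            # aspiration: a tabu move is allowed if it beats the best known solution
            if tabu.get(i, -1) < it or score > best_score:
                candidates.append((score, i, y))
        score, i, x = max(candidates)
        tabu[i] = it + tenure
        if score > best_score:
            best, best_score = x[:], score
    return best, best_score

print(tabu_search())
```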
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Strategies for incorporating formal specifications in software development
Building reliable interactive information systems User software engineering (USE) is a methodology, with supporting tools, for the specification, design, and implementation of interactive information systems. With the USE approach, the user interface is formally specified with augmented state transition diagrams, and the operations may be formally specified with preconditions and postconditions. The USE state transition diagrams may be directly executed with the application development tool RAPID/USE. RAPID/USE and its associated tool RAPSUM create and analyze logging information that is useful for system testing, and for evaluation and modification of the user interface. The authors briefly describe the USE transition diagrams and the formal specification approach, and show how these tools and techniques aid in the creation of reliable interactive information systems.
Where Do Operations Come From? A Multiparadigm Specification Technique We propose a technique to help people organize and write complex specifications, exploiting the best features of several different specification languages. Z is supplemented, primarily with automata and grammars, to provide a rigorous and systematic mapping from input stimuli to convenient operations and arguments for the Z specification. Consistency analysis of the resulting specification is based on the structural rules. The technique is illustrated by two examples, a graphical human-computer interface and a telecommunications system.
Capture, integration, and analysis of digital system requirements with conceptual graphs Initial requirements for new digital systems and products that are generally expressed in a variety of notations including diagrams and natural language can be automatically translated to a common knowledge representation for integration, for consistency and completeness analysis, and for further automatic synthesis. In this paper, block diagrams, flowcharts, timing diagrams, and English as used in specifying digital systems requirements are considered as examples of source notations for system requirements. The knowledge representation selected for this work is a form of semantic networks called conceptual graphs. For each source notation, a basis set of semantic primitives in terms of conceptual graphs is given, together with an algorithm for automatically generating conceptual structures from the notation. The automatic generation of conceptual structures from English presumes a restricted sublanguage of English and feedback to the author for verification of the interpretation. Mechanisms for integrating the separate conceptual structures generated from individual requirements expressions using schemata are discussed, and methods are illustrated for consistency and completeness analysis.
Object-oriented modeling and design
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model "reliable computing platform" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
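To make the basic place/transition semantics mentioned above concrete, here is a minimal sketch (hypothetical net, ordinary arcs of weight one only): a transition is enabled when each of its input places holds a token, and firing moves tokens from input to output places.

```python
# Minimal place/transition net: a transition is enabled when every input place
# holds at least one token; firing consumes input tokens and produces output tokens.
marking = {"p_ready": 1, "p_busy": 0, "p_done": 0}

transitions = {
    "start":  {"in": ["p_ready"], "out": ["p_busy"]},
    "finish": {"in": ["p_busy"],  "out": ["p_done"]},
}

def enabled(t, m):
    return all(m[p] >= 1 for p in transitions[t]["in"])

def fire(t, m):
    assert enabled(t, m), f"{t} is not enabled"
    m = dict(m)
    for p in transitions[t]["in"]:
        m[p] -= 1
    for p in transitions[t]["out"]:
        m[p] += 1
    return m

m = marking
for t in ["start", "finish"]:
    m = fire(t, m)
print(m)   # {'p_ready': 0, 'p_busy': 0, 'p_done': 1}
```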
Performance evaluation in content-based image retrieval: overview and proposals Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented.
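Since the article discusses adopting IR-style evaluation for CBIR, a small sketch of two such measures may be useful: precision at k and non-interpolated average precision for a single query, computed over a hypothetical ranked result list.

```python
# Two standard IR-style measures for a single ranked result list:
# precision at k, and non-interpolated average precision.
def precision_at_k(ranked_ids, relevant_ids, k):
    hits = sum(1 for d in ranked_ids[:k] if d in relevant_ids)
    return hits / k

def average_precision(ranked_ids, relevant_ids):
    hits, total = 0, 0.0
    for rank, d in enumerate(ranked_ids, start=1):
        if d in relevant_ids:
            hits += 1
            total += hits / rank            # precision at each relevant hit
    return total / len(relevant_ids) if relevant_ids else 0.0

ranking = ["img7", "img2", "img9", "img4", "img1"]     # hypothetical retrieval output
relevant = {"img2", "img4", "img8"}
print(precision_at_k(ranking, relevant, 3))            # 1/3
print(round(average_precision(ranking, relevant), 3))  # (1/2 + 2/4)/3 = 0.333
```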
Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions-supplying commands and arguments to programs-and semantic functions-visually presenting application semantics and supporting problem solving cognition. The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models nor metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers.
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer possibilities to make their own decisions to guide the integration process and to achieve qualitative design goals.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1.066667, 0.016667, 0.007407, 0.003333, 0.000442, 0, 0, 0, 0, 0, 0, 0, 0, 0
Position-Guided Tabu Search Algorithm for the Graph Coloring Problem A very undesirable behavior of any heuristic algorithm is to be stuck in some specific parts of the search space, in particular in the basins of attraction of the local optima. While there are many well-studied methods to help the search process escape a basin of attraction, it seems more difficult to prevent it from looping between a limited number of basins of attraction. We introduce a Position Guided Tabu Search (PGTS) heuristic that, besides avoiding local optima, also avoids re-visiting candidate solutions in previously visited regions. A learning process, based on a metric of the search space, guides the Tabu Search toward yet unexplored regions. The results of PGTS for the graph coloring problem are competitive. It significantly improves the results of the basic Tabu Search for almost all tested difficult instances from the DIMACS Challenge Benchmark and it matches most of the best results from the literature.
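The sketch below is only a loose illustration of the position-guided idea for graph coloring: a conflict-minimizing recolouring search with a short tabu tenure, plus a crude memory that penalizes candidate colourings lying within a small Hamming distance of recently visited ones. The paper's search-space metric, learning process, and benchmark setup are not reproduced; the graph instance and all parameters are invented.

```python
# Illustrative sketch only: conflict-driven recolouring with a short tabu list,
# plus a crude "position memory" that discourages returns to recently visited
# colourings (measured by Hamming distance).
import random
from collections import deque

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)]
n, k = 5, 3                                    # 5 vertices, 3 colours

def conflicts(col):
    return sum(1 for u, v in edges if col[u] == col[v])

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def pg_tabu_colouring(iters=500, tenure=7, radius=1, seed=0):
    random.seed(seed)
    col = [random.randrange(k) for _ in range(n)]
    tabu = {}                                  # (vertex, colour) -> iteration until which it is tabu
    visited = deque(maxlen=50)                 # recently recorded positions in the search space
    best, best_conf = col[:], conflicts(col)
    for it in range(iters):
        moves = []
        for v in range(n):
            for c in range(k):
                if c == col[v] or tabu.get((v, c), -1) >= it:
                    continue
                cand = col[:]; cand[v] = c
                # penalise candidates that fall back into a recently visited region
                penalty = sum(1 for p in visited if hamming(cand, p) <= radius)
                moves.append((conflicts(cand) + penalty, random.random(), v, c, cand))
        if not moves:
            continue
        _, _, v, c, cand = min(moves)
        tabu[(v, col[v])] = it + tenure        # forbid moving v straight back to its old colour
        col = cand
        visited.append(col[:])
        if conflicts(col) < best_conf:
            best, best_conf = col[:], conflicts(col)
        if best_conf == 0:
            break
    return best, best_conf

print(pg_tabu_colouring())                     # e.g. a conflict-free 3-colouring of this graph
```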
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
Knowledge Specification of an Expert System It is proposed that knowledge specifications be used as bases for developing and maintaining expert systems. It is suggested that through knowledge acquisition, a knowledge specification representing the kinds of knowledge and reasoning processes used to perform a task can be produced. A prototype can then be built to test and improve the knowledge specification. When a stable and satisfactory specification is obtained, a production system for end users, based on the specification rather than on the prototype, can be implemented. The knowledge specification guides system changes during maintenance. An experimental study to assess and improve this methodology is reported. Prototyping is discussed, an expert system knowledge specification is presented, and a methodology for creating a knowledge specification using conceptual structures is described. The methodology is compared with a currently popular methodology for expert system development. The proposal is primarily intended for medium- to large-scale expert systems, which may have several developers and whose users will not be developing the systems.
The role of knowledge in software development Software development is knowledge-intensive. Many concepts have been developed to ease or guide the processing of knowledge in software development, including information hiding, modularity, objects, functions and procedures, patterns, and more. These concepts are supported by various methods, approaches, and tools using symbols, graphics, and languages. Some are formal; others are semiformal or simply made up of key practices. Methods and approaches in software engineering are often based on the results of empirical observations or on individual success stories.
Integrating multiple paradigms within the blackboard framework While early knowledge-based systems suffered the frequent criticism of having little relevance to the real world, an increasing number of current applications deal with complex, real-world problems. Due to the complexity of real-world situations, no one general software technique can produce adequate results in different problem domains, and artificial intelligence usually needs to be integrated with conventional paradigms for efficient solutions. The complexity and diversity of real-world applications have also forced the researchers in the AI field to focus more on the integration of diverse knowledge representation and reasoning techniques for solving challenging, real-world problems. Our development environment, BEST (Blackboard-based Expert Systems Toolkit), is aimed to provide the ability to produce large-scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his application, using any or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective even for a specific single application than any technique in isolation or collections of multiple techniques less fully integrated. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge. Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem-solving paradigms.
A Tool For Task-Based Knowledge And Specification Acquisition Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and the maintenance of expert systems through their life cycles.
Concept acquisition and analysis for requirements specifications
Understanding quality in conceptual modeling With the increasing focus on early development as a major factor in determining overall quality, many researchers are trying to define what makes a good conceptual model. However, existing frameworks often do little more than list desirable properties. The authors examine attempts to define quality as it relates to conceptual models and propose their own framework, which includes a systematic approach to identifying quality-improvement goals and the means to achieve them. The framework has two unique features: it distinguishes between goals and means by separating what you are trying to achieve in conceptual modeling from how to achieve it (the notion of feasibility is introduced to make the goals more realistic); and it is closely linked to linguistic concepts because modeling is essentially making statements in some language.
Knowledge management and its link to artificial intelligence Knowledge management is an emerging area which is gaining interest by both industry and government. As we move toward building knowledge organizations, knowledge management will play a fundamental role towards the success of transforming individual knowledge into organizational knowledge. One of the key building blocks for developing and advancing this field of knowledge management is artificial intelligence, which many knowledge management practitioners and theorists are overlooking. This paper will discuss the emergence and future of knowledge management, and its link to artificial intelligence.
An indeterminate constructor for applicative programming This paper proposes the encapsulization and control of contending parallel processes within data structures. The advantage of embedding the contention within data is that the contention, itself, thereby becomes an object which can be handled by the program at a level above the actions of the processes themselves. This means that an indeterminate behavior, never precisely specified by the programmer or by the input, may be shared in the same way that an argument to a function is shared by every use of the corresponding parameter, an ability which is of particular importance to applicative-style programming.
Integrating non-interfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.
Matching language and hardware for parallel computation in the Linda Machine The Linda Machine is a parallel computer that has been designed to support the Linda parallel programming environment in hardware. Programs in Linda communicate through a logically shared associative memory called tuple space. The goal of the Linda Machine project is to implement Linda's high-level shared-memory abstraction efficiently on a nonshared-memory architecture. The authors describe the machine's special-purpose communication network and its associated protocols, the design of the Linda coprocessor, and the way its interaction with the network supports global access to tuple space. The Linda Machine is in the process of fabrication. The authors discuss the machine's projected performance and compare this to software versions of Linda.
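A toy tuple space may clarify the programming model the Linda Machine supports in hardware. The sketch below implements non-blocking versions of out, rd, and in with None acting as a formal (wildcard) field; blocking semantics, eval, and the machine's network protocols are omitted.

```python
# Toy tuple space illustrating the Linda operations out/rd/in with pattern
# matching; blocking semantics and eval are intentionally left out.
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):                     # deposit a tuple
        self.tuples.append(tuple(tup))

    def _match(self, pattern, tup):
        # None acts as a formal (wildcard) field; other fields must match exactly.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, *pattern):                  # read a matching tuple, leave it in place
        for t in self.tuples:
            if self._match(pattern, t):
                return t
        return None

    def in_(self, *pattern):                 # withdraw a matching tuple
        for i, t in enumerate(self.tuples):
            if self._match(pattern, t):
                return self.tuples.pop(i)
        return None

ts = TupleSpace()
ts.out("task", 7, "pending")
print(ts.rd("task", None, None))   # ('task', 7, 'pending') -- still in the space
print(ts.in_("task", 7, None))     # withdraws it
print(ts.rd("task", None, None))   # None
```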
Matrix factorizations for reversible integer mapping Reversible integer mapping is essential for lossless source coding by transformation. A general matrix factorization theory for reversible integer mapping of invertible linear transforms is developed. Concepts of the integer factor and the elementary reversible matrix (ERM) for integer mapping are introduced, and two forms of ERM-triangular ERM (TERM) and single-row ERM (SERM)-are studied. We prove that there exist some approaches to factorize a matrix into TERMs or SERMs if the transform is invertible and in a finite-dimensional space. The advantages of the integer implementations of an invertible linear transform are (i) mapping integers to integers, (ii) perfect reconstruction, and (iii) in-place calculation. We find that besides a possible permutation matrix, the TERM factorization of an N-by-N nonsingular matrix has at most three TERMs, and its SERM factorization has at most N+1 SERMs. The elementary structure of ERM transforms is the ladder structure. An executable factorization algorithm is also presented. Then, the computational complexity is compared, and some optimization approaches are proposed. The error bounds of the integer implementations are estimated as well. Finally, three ERM factorization examples of DFT, DCT, and DWT are given
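The simplest ladder (lifting) structure mentioned above can be shown on a 2-point example: an integer Haar/S-transform in which each step adds a rounded function of one value to the other, so the inverse subtracts exactly the same quantity and reconstruction is perfect despite the rounding. This is a generic illustration, not one of the paper's DFT/DCT/DWT factorizations.

```python
# Simplest ladder (lifting) example of a reversible integer map: a 2-point
# integer Haar / S-transform. Each step adds a rounded function of one value
# to the other, so it can be undone exactly by subtracting the same quantity.
def forward(x0, x1):
    h = x0 - x1                 # difference (ladder step 1)
    l = x1 + (h >> 1)           # "average" using floor division (ladder step 2)
    return l, h

def inverse(l, h):
    x1 = l - (h >> 1)           # undo step 2 with the identical rounded term
    x0 = x1 + h                 # undo step 1
    return x0, x1

for pair in [(7, 3), (-5, 12), (255, 0)]:
    assert inverse(*forward(*pair)) == pair   # perfect reconstruction
print(forward(7, 3))            # (5, 4): integer "low" and "high" parts
```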
The software knowledge base We describe a system for maintaining useful information about a software project. The “software knowledge base” keeps track of software components and their properties; these properties are described through binary relations and the constraints that these relations must satisfy. The relations and constraints are entirely user-definable, although a set of predefined libraries of relations with associated constraints is provided for some of the most important aspects of software development (specification, design, implementation, testing, project management).The use of the binary relational model for describing the properties of software is backed by a theoretical study of the relations and constraints which play an important role in software development.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0 to score_13: 1.028997, 0.033648, 0.025216, 0.025216, 0.024273, 0.016268, 0.005022, 0.000318, 0.000025, 0.000001, 0, 0, 0, 0
Observer-Based Event-Triggering Consensus Control for Multiagent Systems With Lossy Sensors and Cyber-Attacks. In this paper, the observer-based event-triggering consensus control problem is investigated for a class of discrete-time multiagent systems with lossy sensors and cyber-attacks. A novel distributed observer is proposed to estimate the relative full states and the estimated states are then used in the feedback protocol in order to achieve the overall consensus. An event-triggered mechanism with st...
Auxiliary function-based summation inequalities and their applications to discrete-time systems. Auxiliary function-based summation inequalities are addressed in this technical note. By constructing appropriate auxiliary functions, several new summation inequalities are obtained. A novel sufficient criterion for asymptotic stability of discrete-time systems with time-varying delay is obtained in terms of linear matrix inequalities. The advantage of the proposed method is demonstrated by two classical examples from the literature.
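As a point of reference for the summation inequalities discussed above, the snippet below numerically checks the basic Jensen-type bound that such auxiliary-function results tighten: for any positive definite R and vectors x_i, (b-a+1) * sum_i x_i^T R x_i >= (sum_i x_i)^T R (sum_i x_i). The random trials are only a sanity check, not a proof, and the setup is not taken from the note.

```python
# Numerical sanity check of the basic Jensen-type summation inequality that the
# auxiliary-function results refine:
#   (b - a + 1) * sum_i x_i^T R x_i  >=  (sum_i x_i)^T R (sum_i x_i),   R > 0.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n, N = 3, rng.integers(1, 10)            # state dimension, number of summands
    A = rng.normal(size=(n, n))
    R = A @ A.T + n * np.eye(n)              # a random positive definite matrix
    X = rng.normal(size=(N, n))              # x_a, ..., x_b stacked row-wise
    lhs = N * sum(x @ R @ x for x in X)
    s = X.sum(axis=0)
    rhs = s @ R @ s
    assert lhs >= rhs - 1e-9
print("Jensen summation inequality held on all random trials")
```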
Energy-to-Peak State Estimation for Static Neural Networks With Interval Time-Varying Delays. This paper is concerned with energy-to-peak state estimation on static neural networks (SNNs) with interval time-varying delays. The objective is to design suitable delay-dependent state estimators such that the peak value of the estimation error state can be minimized for all disturbances with bounded energy. Note that the Lyapunov-Krasovskii functional (LKF) method plus proper integral inequalit...
Event-Triggered Generalized Dissipativity Filtering for Neural Networks With Time-Varying Delays This paper is concerned with event-triggered generalized dissipativity filtering for a neural network (NN) with a time-varying delay. The signal transmission from the NN to its filter is completed through a communication channel. It is assumed that the network measurement of the NN is sampled periodically. An event-triggered communication scheme is introduced to design a suitable filter such that precious communication resources can be saved significantly while certain filtering performance can be ensured. On the one hand, the event-triggered communication scheme is devised to select only those sampled signals violating a certain threshold to be transmitted, which directly leads to saving of precious communication resources. On the other hand, the filtering error system is modeled as a time-delay system closely dependent on the parameters of the event-triggered scheme. Based on this model, a suitable filter is designed such that certain filtering performance can be ensured, provided that a set of linear matrix inequalities are satisfied. Furthermore, since a generalized dissipativity performance index is introduced, several kinds of event-triggered filtering issues, such as H∞ filtering, passive filtering, mixed H∞ and passive filtering, (Q,S,R)-dissipative filtering, and L2-L∞ filtering, are solved in a unified framework. Finally, two examples are given to illustrate the effectiveness of the proposed method.
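A relative-threshold trigger of the general kind described above can be sketched in a few lines: the current sampled measurement is transmitted only when its weighted deviation from the last transmitted sample exceeds sigma times the weighted norm of the current sample. The weight matrix, threshold, and signal below are placeholders, not quantities designed in the paper.

```python
# Illustrative relative-threshold event trigger of the kind used in such schemes:
# transmit the sampled measurement y_k only when
#   (y_k - y_last)^T W (y_k - y_last) > sigma * y_k^T W y_k,
# where y_last is the most recently transmitted sample. W and sigma are
# placeholders, not values designed in the paper.
import numpy as np

def run_trigger(samples, W, sigma):
    y_last = samples[0]
    transmitted = [0]                          # the first sample is always sent
    for k, y in enumerate(samples[1:], start=1):
        e = y - y_last
        if e @ W @ e > sigma * (y @ W @ y):
            transmitted.append(k)
            y_last = y
    return transmitted

rng = np.random.default_rng(1)
samples = np.cumsum(rng.normal(scale=0.05, size=(200, 2)), axis=0)  # a slowly drifting signal
W, sigma = np.eye(2), 0.02
sent = run_trigger(samples, W, sigma)
print(f"transmitted {len(sent)} of {len(samples)} samples")
```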
Stability of Recurrent Neural Networks With Time-Varying Delay via Flexible Terminal Method. This brief is concerned with the stability criteria for recurrent neural networks with time-varying delay. First, based on convex combination technique, a delay interval with fixed terminals is changed into the one with flexible terminals, which is called flexible terminal method (FTM). Second, based on the FTM, a novel Lyapunov-Krasovskii functional is constructed, in which the integral interval ...
Stability Analysis for Delayed Neural Networks Considering Both Conservativeness and Complexity. This paper investigates delay-dependent stability for continuous neural networks with a time-varying delay. This paper aims at deriving a new stability criterion, considering tradeoff between conservativeness and calculation complexity. A new Lyapunov-Krasovskii functional with simple augmented terms and delay-dependent terms is constructed, and its derivative is estimated by several techniques, i...
Single/Multiple Integral Inequalities With Applications to Stability Analysis of Time-Delay Systems. This technical note is concerned with the problem of stability analysis for time-delay systems. A new series of integral inequalities to bound a single integral term is presented by introducing some free matrices, which produces tighter bounds than some existing ones. Similarly, based on orthogonal polynomials defined in integral inner spaces, new series of multiple integral inequalities are presented as well, which include the existing double ones. To show the effectiveness of the proposed inequalities, their applications to stability analysis of systems with discrete and distributed delays are provided with numerical examples.
A New Model Transformation of Discrete-Time Systems With Time-Varying Delay and Its Application to Stability Analysis. This technical note focuses on analyzing a new model transformation of uncertain linear discrete-time systems with time-varying delay and applying it to robust stability analysis. The uncertainty is assumed to be norm-bounded and the delay intervally time-varying. A new comparison model is proposed by employing a new approximation for delayed state, and then lifting method and simple Lyapunov-Krasovskii functional method are used to analyze the scaled small gain of this comparison model. This new approximation results in a much smaller error than the existing ones. Based on the scaled small gain theorem, new stability criteria are proposed in terms of linear matrix inequalities. Moreover, it is shown that the obtained conditions can be established through direct Lyapunov method. Two numerical examples are presented to illustrate the effectiveness and superiority of our results over the existing ones.
New approach on robust delay-dependent H∞ control for uncertain T-S fuzzy systems with interval time-varying delay This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither any model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
Different perspectives on information systems: problems and solutions The paper puts information systems (IS) research dealing with IS problems into perspective. IS problems are surveyed and classified. Using the IS research framework suggested by Ives, Hamilton, and Davis, research into IS problems is classified into several perspectives whose relevance in coping with the problems is discussed. Research perspectives focusing on IS operations environment, IS development process, IS development organization, IS development methods, and IS theories are distinguished. The paper concludes with suggestions for future research and how to deal with IS problems in practice.
On the relation between Memon's and the modified Zeng's palette reordering methods Palette reordering has been shown to be a very effective approach for improving the compression of color-indexed images by general purpose continuous-tone image coding techniques. In this paper, we provide a comparison, both theoretical and experimental, of two of these methods: the pairwise merging heuristic proposed by Memon et al. and the recently proposed modification of Zeng's method. This analysis shows how several parts of the algorithms relate and how their performance is affected by some modifications. Moreover, we show that Memon's method can be viewed as an extension of the modified version of Zeng's technique and, therefore, that the modified Zeng's method can be obtained through some simplifications of Memon's method.
Data refinement by miracles Data refinement is the transformation in a computer program of one data type to another. Usually, we call the original data type ‘abstract’ and the final data type ‘concrete’. The concrete data type is said to represent the abstract. In spite of recent advances, there remain obvious data refinements that are difficult to prove. We give such a refinement and present a new technique that avoids the difficulty. Our innovation is the use of program fragments that do not satisfy Dijkstra's Law of the excluded miracle. These of course can never be implemented, so they must be eliminated before the final program is reached. But, in the intermediate stages of development, they simplify the calculations.
Story-map: iPad companion for long form TV narratives Long form TV narratives present multiple continuing characters and story arcs that last over multiple episodes and even over multiple seasons. Writers increasingly take pride in creating coherent and persistent story worlds with recurring characters and references to backstory. Since viewers may join the story at different points and different levels of commitment, they need support to orient them to the fictional world, to remind them of plot threads, and to allow them to review important story sequences across episodes. Using the affordances of the digital medium we can create navigation patterns and auxiliary information streams to minimize confusion and maximize immersion in the story world. In our application, the iPad is used as a secondary screen to create a character map synchronized with the TV content, and to support navigation of story threads across episodes.
An improved lossless image compression based arithmetic coding using mixture of non-parametric distributions. In this paper, we propose a new approach for a block-based lossless image compression using finite mixture models and adaptive arithmetic coding. Conventional arithmetic encoders encode and decode images sample-by-sample in raster scan order. In addition, conventional arithmetic coding models provide the probability distribution for whole source symbols to be compressed or transmitted, including static and adaptive models. However, in the proposed scheme, an image is divided into non-overlapping blocks and then each block is encoded separately by using arithmetic coding. The proposed model provides a probability distribution for each block which is modeled by a mixture of non-parametric distributions by exploiting the high correlation between neighboring blocks. The Expectation-Maximization algorithm is used to find the maximum likelihood mixture parameters in order to maximize the arithmetic coding compression efficiency. The results of comparative experiments show that we provide significant improvements over the state-of-the-art lossless image compression standards and algorithms. In addition, experimental results show that the proposed compression algorithm beats JPEG-LS by 9.7 % when switching between pixel and prediction error domains.
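As a rough, hedged illustration of why the block-wise modeling described in the abstract above can pay off, the sketch below compares the ideal code length (in bits) of a synthetic compound image under a single global histogram model versus independent per-block histogram models. It is not the paper's method: there is no EM-fitted mixture and no arithmetic coder, the cost of signaling the per-block models is ignored, and the block size and Laplace smoothing constant are arbitrary assumptions.

```python
import numpy as np

def ideal_code_length(symbols, probs):
    """Ideal code length in bits: -sum(log2 p(symbol))."""
    return float(-np.sum(np.log2(probs[symbols])))

def global_cost(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    probs = (hist + 1.0) / (hist.sum() + levels)      # Laplace smoothing
    return ideal_code_length(img.ravel(), probs)

def blockwise_cost(img, block=16, levels=256):
    h, w = img.shape
    total = 0.0
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = img[r:r + block, c:c + block].ravel()
            hist = np.bincount(blk, minlength=levels).astype(float)
            probs = (hist + 1.0) / (hist.sum() + levels)
            total += ideal_code_length(blk, probs)
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "compound" image: smooth gradient with a noisy (image-like) patch.
    img = np.tile(np.arange(128, dtype=np.uint8), (128, 1))
    img[32:64, 32:64] = rng.integers(0, 256, (32, 32), dtype=np.uint8)
    print("global model bits   :", round(global_cost(img)))
    print("per-block model bits:", round(blockwise_cost(img)))
```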
1.112625
0.052625
0.035083
0.020588
0.00872
0.002669
0.000867
0.0004
0.000008
0
0
0
0
0
Robust Controller Design For Uncertain T-S Fuzzy Systems With Time-Varying Delays This paper analyzes the robust control problems for a class of uncertain Takagi-Sugeno (T-S) fuzzy systems with time varying delays. T-S fuzzy models are employed to represent uncertain delayed nonlinear systems. A Parallel Distributed Compensation (PDC) control law, including both memoryless and delayed state feedback, is considered for stabilization purposes. Based on the choice of a convenient Lyapunov-Krasovskii Functional (LKF) and introducing free weighting matrices, sufficient delay-dependent controller design conditions are derived in terms of linear matrix inequalities (LMIs). Finally, a numerical example is presented to demonstrate the effectiveness of the proposed approach and the conservatism improvement with respect to previous results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
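The abstract above describes the general machinery of tabu search; the sketch below is a minimal, generic tabu search for the 0/1 multiconstraint knapsack problem with a single-flip neighborhood, a recency-based tabu list, and an aspiration rule that overrides tabu status when a move yields a new best feasible value. It is not the paper's method (no extreme-point choice rules, target analysis, or learning), and the penalty weight, tabu tenure, and example data are arbitrary assumptions.

```python
import random

def tabu_knapsack(values, weights, capacities, iters=2000, tenure=7, penalty=1000.0, seed=0):
    """values[i]: item value; weights[k][i]: weight of item i in constraint k;
    capacities[k]: capacity of constraint k. Returns (best_value, best_solution)."""
    rnd = random.Random(seed)
    n, m = len(values), len(capacities)

    def usage(sol):
        return [sum(weights[k][i] for i in range(n) if sol[i]) for k in range(m)]

    def value(sol):
        return sum(values[i] for i in range(n) if sol[i])

    def feasible(sol):
        return all(u <= c for u, c in zip(usage(sol), capacities))

    def score(sol):
        # Objective minus a linear penalty for violated capacities.
        over = sum(max(0.0, u - c) for u, c in zip(usage(sol), capacities))
        return value(sol) - penalty * over

    sol = [0] * n
    best_sol, best_val = sol[:], 0.0
    tabu_until = [0] * n                  # iteration until which flipping item i is tabu
    for it in range(1, iters + 1):
        best_move, best_move_score = None, float("-inf")
        for i in range(n):
            cand = sol[:]
            cand[i] ^= 1
            aspirated = feasible(cand) and value(cand) > best_val
            if (tabu_until[i] <= it or aspirated) and score(cand) > best_move_score:
                best_move, best_move_score = i, score(cand)
        if best_move is None:
            break                         # every move is tabu and none aspirated
        sol[best_move] ^= 1
        tabu_until[best_move] = it + tenure + rnd.randint(0, 3)
        if feasible(sol) and value(sol) > best_val:
            best_val, best_sol = value(sol), sol[:]
    return best_val, best_sol

if __name__ == "__main__":
    values = [10, 13, 7, 8, 15, 9]
    weights = [[2, 3, 1, 4, 5, 2],        # constraint 1
               [3, 1, 2, 3, 2, 4]]        # constraint 2
    capacities = [9, 8]
    print(tabu_knapsack(values, weights, capacities))
```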
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
Improving the performance of Apache Hadoop on pervasive environments through context-aware scheduling. This article proposes to improve Apache Hadoop scheduling through a context-aware approach. Apache Hadoop is the most popular implementation of the MapReduce paradigm for distributed computing, but its design does not adapt automatically to computing nodes’ context and capabilities. By introducing context-awareness into Hadoop, we intend to dynamically adapt its scheduling to the execution environment. This is a necessary feature in the context of pervasive grids, which are heterogeneous, dynamic and shared environments. The solution has been incorporated into Hadoop and assessed through controlled experiments. The experiments demonstrate that context-awareness provides comparative performance gains, especially when some of the resources disappear during execution.
A lightweight decentralized service placement policy for performance optimization in fog computing A decentralized optimization policy for service placement in fog computing is presented. The optimization aims to place the most popular services as close to the users as possible. The experimental validation is done in the iFogSim simulator by comparing our algorithm with the simulator’s built-in policy. The simulation is characterized by modeling a microservice-based application for different experiment sizes. Results showed that our decentralized algorithm places the most popular services closer to users, improving network usage and service latency of the most requested applications, at the expense of a latency increment for the less requested services and a greater number of service migrations.
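As a hedged, much-simplified companion to the abstract above, the sketch below implements a centralized greedy version of the popularity-first idea: services sorted by request popularity are assigned to the closest node (by hop distance from the users) with enough spare capacity, with the cloud as the fallback. The node and service fields, and the example values, are invented for illustration and do not come from the paper or from iFogSim.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    hops_from_users: int          # smaller means closer to the edge
    capacity: int                 # abstract resource units
    placed: list = field(default_factory=list)

@dataclass
class Service:
    name: str
    popularity: int               # request rate
    demand: int                   # resource units required

def place(services, nodes):
    """Greedy popularity-first placement: the most requested services go to the
    closest node that still has room; the farthest node acts as the cloud."""
    nodes = sorted(nodes, key=lambda n: n.hops_from_users)
    for svc in sorted(services, key=lambda s: s.popularity, reverse=True):
        for node in nodes:
            used = sum(s.demand for s in node.placed)
            if node.capacity - used >= svc.demand:
                node.placed.append(svc)
                break
    return nodes

if __name__ == "__main__":
    nodes = [Node("edge-gw", 1, 4), Node("fog-1", 2, 6), Node("cloud", 5, 100)]
    services = [Service("video", 90, 3), Service("auth", 60, 2), Service("batch", 5, 8)]
    for node in place(services, nodes):
        print(node.name, [s.name for s in node.placed])
```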
An incremental ant colony optimization based approach to task assignment to processors for multiprocessor scheduling. Optimized task scheduling is one of the most important challenges to achieve high performance in multiprocessor environments such as parallel and distributed systems. Most introduced task-scheduling algorithms are based on the so-called list scheduling technique. The basic idea behind list scheduling is to prepare a sequence of nodes in the form of a list for scheduling by assigning them some priority measurements, and then repeatedly removing the node with the highest priority from the list and allocating it to the processor providing the earliest start time (EST). Therefore, it can be inferred that the makespans obtained are dominated by two major factors: (1) which order of tasks should be selected (sequence subproblem); (2) how the selected order should be assigned to the processors (assignment subproblem). A number of good approaches for overcoming the task sequence dilemma have been proposed in the literature, while the task assignment problem has not been studied much. The results of this study prove that assigning tasks to the processors using the traditional EST method is not optimum; in addition, a novel approach based on the ant colony optimization algorithm is introduced, which can find far better solutions.
Automatic determination of grain size for efficient parallel processing The authors propose a method for automatic determination and scheduling of modules from a sequential program.
Automatic speech recognition- an approach for designing inclusive games Computer games are now a part of our modern culture. However, certain categories of people are excluded from this form of entertainment and social interaction because they are unable to use the interface of the games. The reason for this can be deficits in motor control, vision or hearing. By using automatic speech recognition systems (ASR), voice driven commands can be used to control the game, which can thus open up the possibility for people with motor system difficulty to be included in game communities. This paper aims to find a standard way of using voice commands in games, backed by a speech recognition system, that can be universally applied for designing inclusive games. Present speech recognition systems, however, do not support emotions, attitudes, tones etc. This is a drawback because such expressions can be vital for gaming. Taking multiple types of existing genres of games into account and analyzing their voice command requirements, a general ASRS module is proposed which can work as a common platform for designing inclusive games. A fuzzy logic controller is then proposed to enhance the system. The standard voice driven module can be based on an algorithm or a fuzzy controller, which can be used to design software plug-ins or can be included in a microchip. It can then be integrated with game engines, creating the possibility of voice driven universal access for controlling games.
A novel method for solving the fully neutrosophic linear programming problems The most widely used technique for solving and optimizing a real-life problem is linear programming (LP), due to its simplicity and efficiency. However, to handle imprecision in the data, neutrosophic set theory plays a vital role: it simulates the human decision-making process by considering all aspects of a decision (i.e., agree, not sure and disagree). Building on these advantages, in the present work we introduce neutrosophic LP models whose parameters are represented by trapezoidal neutrosophic numbers and present a technique for solving them. The presented approach is illustrated with numerical examples and its superiority is shown by comparison with the state of the art. Finally, we conclude that the proposed approach is simpler, more efficient and more capable of solving LP models than other methods.
Secure Medical Data Transmission Model for IoT-Based Healthcare Systems. Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security, and the integrity of the medical data became big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed through integrating either 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption schema is built using a combination of Advanced Encryption Standard, and Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters; the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values were relatively varied from 50.59 to 57.44 in case of color images and from 50.52 to 56.09 with the gray scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were ones for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the confidential patient's data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image.
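A minimal sketch of the hybrid-encryption half of the pipeline described above (a fresh AES session key wrapped with RSA-OAEP), assuming the PyCryptodome library; the DWT-based steganographic embedding into a cover image is omitted, and AES in EAX mode is used here for authenticated encryption rather than the paper's exact AES configuration.

```python
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

def hybrid_encrypt(plaintext: bytes, rsa_public_key):
    """Encrypt data with a fresh AES session key, then wrap that key with RSA-OAEP."""
    session_key = get_random_bytes(16)
    wrapped_key = PKCS1_OAEP.new(rsa_public_key).encrypt(session_key)
    cipher = AES.new(session_key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return wrapped_key, cipher.nonce, tag, ciphertext

def hybrid_decrypt(wrapped_key, nonce, tag, ciphertext, rsa_private_key):
    session_key = PKCS1_OAEP.new(rsa_private_key).decrypt(wrapped_key)
    cipher = AES.new(session_key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)

if __name__ == "__main__":
    key = RSA.generate(2048)
    record = b"patient-id: 0421; diagnosis: hypertension"   # hypothetical payload
    blob = hybrid_encrypt(record, key.publickey())
    print(hybrid_decrypt(*blob, key))
```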
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
Strategies for information requirements determination Correct and complete information requirements are key ingredients in planning organizational information systems and in implementing information systems applications. Yet, there has been relatively little research on information requirements determination, and there are relatively few practical, well-formulated procedures for obtaining complete, correct information requirements. Methods for obtaining and documenting information requirements are proposed, but they tend to be presented as general solutions rather than alternative methods for implementing a chosen strategy of requirements determination. This paper identifies two major levels of requirements: the organizational information requirements reflected in a planned portfolio of applications and the detailed information requirements to be implemented in a specific application. The constraints on humans as information processors are described in order to explain why "asking" users for information requirements may not yield a complete, correct set. Various strategies for obtaining information requirements are explained. Examples are given of methods that fit each strategy. A contingency approach is then presented for selecting an information requirements determination strategy. The contingency approach is explained both for defining organizational information requirements and for defining specific, detailed requirements in the development of an application.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Behavioral Subtyping, Specification Inheritance, and Modular Reasoning 2006 CR Categories: D. 2.2 [Software Engineering] Design Tools and Techniques, Object-oriented design methods; D. 2.3 [Software Engineering] Coding Tools and Techniques, Object-oriented programming; D. 2.4 [Software Engineering] Software/Program Verification, Class invariants, correctness proofs, formal methods, programming by contract, reliability, tools, Eiffel, JML; D. 2.7 [Software Engineering] Distribution, Maintenance, and Enhancement, Documentation; D. 3.1 [Programming Languages] Formal Definitions and Theory, Semantics; D. 3.2 [Programming Languages] Language Classifications, Object-oriented languages; D. 3.3 [Programming Languages] Language Constructs and Features, classes and objects, inheritance; F. 3.1 [Logics and Meanings of Programs] Specifying and Verifying and Reasoning about Programs, Assertions, invariants, logics of programs, pre-and post-conditions, specification techniques;
Reflection in direct style A reflective language enables us to access, inspect, and/or modify the language semantics from within the same language framework. Although the degree of semantics exposure differs from one language to another, the most powerful approach, referred to as the behavioral reflection, exposes the entire language semantics (or the language interpreter) that defines behavior of user programs for user inspection/modification. In this paper, we deal with the behavioral reflection in the context of a functional language Scheme. In particular, we show how to construct a reflective interpreter where user programs are interpreted by the tower of metacircular interpreters and have the ability to change any parts of the interpreters during execution. Its distinctive feature compared to the previous work is that the metalevel interpreters observed by users are written in direct style. Based on the past attempt of the present author, the current work solves the level-shifting anomaly by defunctionalizing and inspecting the top of the continuation frames. The resulting system enables us to freely go up and down the levels and access/modify the direct-style metalevel interpreter. This is in contrast to the previous system where metalevel interpreters were written in continuation-passing style (CPS) and only CPS functions could be exposed to users for modification.
Hyperspectral image compression based on lapped transform and Tucker decomposition In this paper, we present a hyperspectral image compression system based on the lapped transform and Tucker decomposition (LT-TD). In the proposed method, each band of a hyperspectral image is first decorrelated by a lapped transform. The transformed coefficients of different frequencies are rearranged into three-dimensional (3D) wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors. Then they are decomposed by Tucker decomposition into a core tensor and three factor matrices. The core tensor preserves most of the energy of the original tensor, and it is encoded using a bit-plane coding algorithm into bit-streams. Comparison experiments have been performed and provided, as well as an analysis regarding the contributing factors for the compression performance, such as the rank of the core tensor and quantization of the factor matrices. Highlights: We design a hyperspectral image compression using lapped transform and Tucker decomposition. Each band of a hyperspectral image is decorrelated by a lapped transform. Transformed coefficients of various frequencies are rearranged in 3D wavelet subband structures. 3D subbands are viewed as third-order tensors, decomposed by Tucker decomposition. The core tensor is encoded using a bit-plane coding algorithm into bit-streams.
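As a hedged illustration of the Tucker step described above, the numpy-only sketch below performs a truncated higher-order SVD (a standard way to compute a Tucker-style core and factor matrices) on a synthetic "spatial x spatial x spectral" cube and reports the reconstruction error and coefficient count. The lapped transform, the wavelet-style regrouping, and the bit-plane coding of the core are omitted, and the ranks are arbitrary assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows indexed by the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M (M multiplies the chosen mode of T)."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: returns (core, factors) with core of shape `ranks`."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_dot(T, U, mode)
    return T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic low-rank "spatial x spatial x spectral" cube (32 x 32 x 16).
    cube = rng.standard_normal((32, 32, 8)) @ rng.standard_normal((8, 16))
    core, factors = hosvd(cube, ranks=(10, 10, 6))
    approx = reconstruct(core, factors)
    err = np.linalg.norm(cube - approx) / np.linalg.norm(cube)
    kept = core.size + sum(U.size for U in factors)
    print(f"relative error {err:.3f}, coefficients kept {kept} of {cube.size}")
```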
1.101667
0.103333
0.103333
0.051667
0.026667
0.001667
0.000667
0.000056
0
0
0
0
0
0
Automatic verification of finite-state concurrent systems using temporal logic specifications We give an efficient procedure for verifying that a finite-state concurrent system meets a specification expressed in a (propositional, branching-time) temporal logic. Our algorithm has complexity linear in both the size of the specification and the size of the global state graph for the concurrent system. We also show how this approach can be adapted to handle fairness. We argue that our technique can provide a practical alternative to manual proof construction or use of a mechanical theorem prover for verifying many finite-state concurrent systems. Experimental results show that state machines with several hundred states can be checked in a matter of seconds.
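The abstract above concerns the fixpoint characterization of branching-time temporal operators over an explicit state graph; the sketch below is a minimal, hedged illustration of that idea for EX, EU and EG on a tiny hand-built Kripke structure. The state names, labels and checked properties are invented for illustration, and fairness constraints are not handled.

```python
def preimage(trans, S):
    """States with at least one successor in S."""
    return {s for s, succs in trans.items() if succs & S}

def check_EX(trans, S):
    return preimage(trans, S)

def check_EU(trans, A, B):
    """Least fixpoint for E[A U B]: Z = B union (A intersect pre(Z))."""
    Z = set(B)
    while True:
        new = Z | (set(A) & preimage(trans, Z))
        if new == Z:
            return Z
        Z = new

def check_EG(trans, A):
    """Greatest fixpoint for EG A: keep states with a successor still in Z."""
    Z = set(A)
    while True:
        new = {s for s in Z if trans[s] & Z}
        if new == Z:
            return Z
        Z = new

if __name__ == "__main__":
    # Kripke structure: state -> set of successors, plus atomic-proposition labels.
    trans = {"s0": {"s1"}, "s1": {"s1", "s2"}, "s2": {"s0"}}
    labels = {"req": {"s0"}, "grant": {"s2"}}
    states = set(trans)
    # AF grant == not EG(not grant)
    af_grant = states - check_EG(trans, states - labels["grant"])
    print("EX grant holds in      :", sorted(check_EX(trans, labels["grant"])))
    print("AF grant holds in      :", sorted(af_grant))
    print("E[req U grant] holds in:", sorted(check_EU(trans, labels["req"], labels["grant"])))
```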
Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
Applying the SCR requirements method to a weapons control panel: an experience report
An integrated method for effective behaviour analysis of distributed systems No abstract available.
GRAIL/KAOS: An Environment for Goal-Driven Requirements Analysis, Integration and Layout The KAOS methodology provides a language, a method, and meta-level knowledge for goal-driven requirements elaboration. The language provides a rich ontology for capturing requirements in terms of goals, constraints, objects, actions, agents etc. Links between requirements are represented as well to capture refinements, conflicts, operationalizations, responsibility assignments, etc. The KAOS specification language is a multi-paradigm language with a two-level structure: an outer semantic net layer for declaring concepts, their attributes and links to other concepts, and an inner formal assertion layer for formally defining the concept. The latter combines a real-time temporal logic for the specification of goals, constraints, and objects, and standard pre-/postconditions for the specification of actions and their strengthening to ensure the constraints
Document ranking and the vector-space model Efficient and effective text retrieval techniques are critical in managing the increasing amount of textual information available in electronic form. Yet text retrieval is a daunting task because it is difficult to extract the semantics of natural language texts. Many problems must be resolved before natural language processing techniques can be effectively applied to a large collection of texts. Most existing text retrieval techniques rely on indexing keywords. Unfortunately, keywords or index terms alone cannot adequately capture the document contents, resulting in poor retrieval performance. Yet keyword indexing is widely used in commercial systems because it is still the most viable way by far to process large amounts of text. Using several simplifications of the vector-space model for text retrieval queries, the authors seek the optimal balance between processing efficiency and retrieval effectiveness as expressed in relevant document rankings
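As a small, hedged companion to the abstract above, the sketch below ranks a toy corpus against a query with TF-IDF weights and cosine similarity, the textbook form of the vector-space model; real retrieval systems add tokenization, stemming, stop-word removal and better weighting schemes, none of which is attempted here.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """One {term: weight} vector per text, using raw tf times log(N/df) idf."""
    tokenized = [t.lower().split() for t in texts]
    df = Counter(term for tokens in tokenized for term in set(tokens))
    n = len(texts)
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(tokens).items()} for tokens in tokenized]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query, docs):
    vecs = tfidf_vectors(docs + [query])     # idf computed over docs plus query
    qvec, dvecs = vecs[-1], vecs[:-1]
    return sorted(((cosine(qvec, d), i) for i, d in enumerate(dvecs)), reverse=True)

if __name__ == "__main__":
    docs = ["lossless compression of compound images",
            "model checking of open reactive systems",
            "context modeling for lossless image coding"]
    for score, i in rank("lossless image compression", docs):
        print(f"{score:.3f}  {docs[i]}")
```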
Elements of style: analyzing a software design feature with a counterexample detector We illustrate the application of Nitpick, a specification checker, to the design of a style mechanism for a word processor. The design is cast, along with some expected properties, in a subset of Z. Nitpick checks a property by enumerating all possible cases within some finite bounds, displaying as a counterexample the first case for which the property fails to hold. Unlike animation or execution tools, Nitpick does not require state transitions to be expressed constructively, and unlike theorem provers, operates completely automatically without user intervention. Using a variety of reduction mechanisms, it can cover an enormous number of cases in a reasonable time, so that subtle flaws can be rapidly detected.
Automated assistance for conflict resolution in multiple perspective systems analysis and operation Interest in developing systems from multiple system perspectives, both in representational form and in semantic content, is becoming more commonplace. However, automated support for the tasks involved is relatively scarce. In this paper, we describe our ongoing effort to provide a single networked server interface to a variety of multiple perspective systems analysis tools. We illustrate our tool suite with two types of systems analysis: intra-organizational and inter-organizational. 1. Multiple Perspective System Analysis Researchers from a variety of backgrounds have converged on a single approach to representing development information: multiple perspectives. From the initial elicitation of user requirements to the tracking of multiple program versions, multiple representations have become commonplace throughout the life-cycle. While each aspect of the lifecycle has different needs, in all cases multiple product perspectives provide common benefits: version control, concurrent development, and increased reuse. Increasingly, researchers recognize the need for not only multiple product versions, but multiple representational forms (e.g., petri nets and state transition diagrams), as well as multiple original sources (e.g., user requirements and management requirements). While multiple perspectives are useful throughout the lifecycle, this paper focuses on the early stages of development, specifically analysis of system requirements. In addition to multiple requirements perspectives, we also want to include user goals and preferences. In fact, our users include all stakeholders: those agents who affect or are affected by the eventual artifact. By considering the origination of design goals, one can assist the negotiation between interacting goals as well as their relaxation against problem constraints. We call this paradigm multiple perspective systems analysis.
A co-operative scenario based approach to acquisition and validation of system requirements: How exceptions can help! Scenarios, in most situations, are descriptions of required interactions between a desired system and its environment, which detail normative system behaviour. Our studies of current scenario use in requirements engineering have revealed that there is considerable interest in the use of scenarios for acquisition, elaboration and validation of system requirements. However, scenarios have seldom bee...
Real-time constraints in a rapid prototyping language This paper presents real-time constraints of a prototyping language and some mechanisms for handling these constraints in rapidly prototyping embedded systems. Rapid prototyping of embedded systems can be accomplished using a Computer Aided Prototyping System (CAPS) and its associated Prototyping Language (PSDL) to aid the designer in handling hard real-time constraints. The language models time critical operations with maximum execution times, maximum response times and minimum periods. The mechanisms for expressing timing constraints in PSDL are described along with their meanings relative to a series of hardware models which include multi-processor configurations. We also describe a language construct for specifying the policies governing real-time behavior under overload conditions.
Superposition and fairness in reactive system refinement An overview of the refinement calculus and of the action system paradigm for constructing parallel and reactive systems is given. Superposition is studied in detail, as an example of an important method for refinement of reactive programs. In connection with superposition, fairness of action system execution is considered, and a proof rule for preserving fairness in superposition refinement is given
OIL: An Ontology Infrastructure for the Semantic Web Currently, computers are changing from single isolated devices to entry points into a worldwide network of information exchange and business transactions. Support in the exchange of data, information, and knowledge is becoming the key issue in computer technology today. Ontologies provide a shared and common understanding of a domain that can be communicated between people and across application systems. Ontologies will play a major role in supporting information exchange processes in various areas. A prerequisite for such a role is the development of a joint standard for specifying and exchanging ontologies well integrated with existing Web standards. This article deals with precisely this necessity. The authors present OIL, a proposal for such a standard enabling the semantic Web. It is based on existing proposals such as OKBC, XOL, and RDFS and enriches them with necessary features for expressing rich ontologies. The article presents the motivation, underlying rationale, modeling primitives, syntax, semantics, tool environment, and applications of OIL.
Terms with unbounded demonic and angelic nondeterminacy We show how to introduce demonic and angelic nondeterminacy into the term language of each type in a typical programming or specification language. For each type we introduce (binary infix) operators ⊓ and ⊔ on terms of the type, corresponding to demonic and angelic nondeterminacy, respectively. We generalise these operators to accommodate unbounded nondeterminacy. We axiomatise the operators and derive their important properties. We show that a suitable model for nondeterminacy is the free completely distributive complete lattice over a poset, and we use this to show that our axiomatisation is sound. In the process, we exhibit a strong relationship between nondeterminacy and free lattices that has not hitherto been evident.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.006443
0.008564
0.007574
0.005818
0.004858
0.004607
0.003947
0.00243
0.001276
0.000304
0.000016
0
0
0
Extended dissipative estimator design for uncertain switched delayed neural networks via a novel triple integral inequality. This paper addresses the problem of extended dissipative estimator design for uncertain switched neural networks (SNNs) with mixed time-varying delays and general activation functions. Firstly, for dealing with the triple integral term, a new integral inequality is derived. Secondly, based on the theory of convex combination, we propose a novel flexible delay division method and the corresponding modified Lyapunov–Krasovskii functional (LKF) is established. Thirdly, a switching estimator design approach is contributed, which ensures that the resulting augmented system is extended dissipative. Combining the extended reciprocally convex technique with the Wirtinger-based integral inequality, an improved delay-dependent exponential stability criterion is obtained. Finally, an example with two cases is provided to illustrate the feasibility and effectiveness of the developed theoretical results.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms.
Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0