Dataset schema — column, dtype, and min/max (string length for text columns, value for score columns):

Column       Dtype     Min   Max
Query Text   string    9     8.71k
Ranking 1    string    14    5.31k
Ranking 2    string    11    5.31k
Ranking 3    string    11    8.42k
Ranking 4    string    17    8.71k
Ranking 5    string    14    4.95k
Ranking 6    string    14    8.42k
Ranking 7    string    17    8.42k
Ranking 8    string    10    5.31k
Ranking 9    string    9     8.42k
Ranking 10   string    9     8.42k
Ranking 11   string    10    4.11k
Ranking 12   string    14    8.33k
Ranking 13   string    17    3.82k
score_0      float64   1     1.25
score_1      float64   0     0.25
score_2      float64   0     0.25
score_3      float64   0     0.24
score_4      float64   0     0.24
score_5      float64   0     0.24
score_6      float64   0     0.21
score_7      float64   0     0.1
score_8      float64   0     0.02
score_9      float64   0     0
score_10     float64   0     0
score_11     float64   0     0
score_12     float64   0     0
score_13     float64   0     0
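The schema above suggests that each record pairs one query text with 13 ranked candidate texts and 14 float scores. As a minimal sketch (not part of the dataset itself), the snippet below shows how one such record might be flattened into (query, candidate, relevance) triples for a re-ranking experiment; the mapping of score_i to Ranking i, the example record, and the helper name flatten_row are assumptions of ours, not something the dataset specifies.

# Minimal sketch, under the assumptions stated above: flatten one record of this
# dataset into (query, candidate, relevance) triples for a re-ranking experiment.

def flatten_row(row):
    """Return (query, candidate, relevance) triples for one dataset record."""
    query = row["Query Text"]
    triples = []
    for i in range(1, 14):  # Ranking 1 .. Ranking 13
        candidate = row[f"Ranking {i}"]
        relevance = row[f"score_{i}"]  # assumed: score_i grades Ranking i
        triples.append((query, candidate, relevance))
    return triples

# Hypothetical record with the same field names as the schema above.
example = {"Query Text": "example query abstract"}
example.update({f"Ranking {i}": f"candidate abstract {i}" for i in range(1, 14)})
example.update({f"score_{i}": 0.0 for i in range(14)})
example["score_1"] = 0.2  # hypothetical graded relevance for Ranking 1

for query, candidate, relevance in flatten_row(example)[:3]:
    print(relevance, candidate)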
IOT based wearable sensor for diseases prediction and symptom analysis in healthcare sector Maintaining good health has become more difficult in today's life because of changing food habits and environments, so people need awareness of their health condition. Health-support systems face significant challenges such as a lack of adequate medical information, preventable errors, data threats, misdiagnosis, and delayed transmission. To overcome these problems, we propose a wearable sensor connected to Internet of Things (IoT) based big data, i.e., data-mining analysis in healthcare. We design a Generalized Approximate Reasoning based Intelligence Control (GARIC) scheme with regression rules to gather information about the patient from the IoT. The data are then used to train an artificial intelligence (AI) model with a deep learning mechanism, the Boltzmann belief network. Subsequently, a Regularization Genome-Wide Association Study (GWAS) is used to predict diseases. Thus, if people are affected by a disease, they receive warnings by SMS, email, etc., and then obtain treatment and advice from doctors.
Language Teaching in 3D Virtual Worlds with Machinima: Reflecting on an Online Machinima Teacher Training Course This article is based on findings arising from a large, two-year EU project entitled "Creating Machinima to Enhance Online Language Learning and Teaching" (CAMELOT), which was the first to investigate the potential of machinima, a form of virtual filmmaking that uses screen captures to record activity in immersive 3D environments, for language teaching. The article examines interaction in two particular phases of the project: facilitator-novice teacher interaction in an online teacher training course which took place in Second Life and teachers' field-testing of machinima which arose from it. Examining qualitative data from interviews and screen recordings following two iterations of a 6-week online teacher training course which was designed to train novice teachers how to produce machinima and the evaluation of the field-testing, the article highlights the pitfalls teachers encountered and reinforces the argument that creating opportunities for pedagogical purposes in virtual worlds implies that teachers need to change their perspectives to take advantage of the affordances offered.
Creating Convivial Affordances: A Study Of Virtual World Social Movements The study of technology and societal challenges is a growing area in information systems research. This paper explores how social movements can use virtual worlds to raise awareness or create safe spaces for their members. As social movements move into virtual worlds, the technical environment becomes more important. This paper presents an interpretive field study using netnographic research and empirical data from a study of a lesbian, gay, bisexual, and transgender social movement in World of Warcraft. This paper takes the position that an understanding of affordances is required for users to be able to create convivial outcomes to shape the use of virtual worlds for their own goals and intentions. The paper presents the concept of convivial affordances, which brings together the theories of affordances and conviviality, and suggests that social users can shape IT artefacts through a creative combination of affordances for their specific goals, and with community involvement.
Virtual Worlds: A New Environment for Constructionist Learning Virtual worlds have the potential to provide a new environment in which to engage learners in constructionist activities. However, they were not designed for education and have features and affordances which are not found in traditional constructionist environments. These may limit the pedagogy in action and/or provide new opportunities with which to transform constructionist pedagogy in practice, but to date there has been no research on these issues. To address this, we explore constructionism in action in the virtual world Second Life. This is the first study to examine the theoretical alignment of pedagogy and technology in practice. An exploratory case study of a purpose-built constructionist learning experience was conducted. The experience was designed based on the theoretical alignment of pedagogy and technology and implemented with 24 postgraduate students over four weeks. Open non-directive interviews, chat logs, constructed artefacts, learners’ written reflections and observations were collected and analyzed using the constant comparative approach. The findings provide insights into how learners engage in meaningful artefact construction, highlight the role of avatars and draw attention to the importance of the designed space. New opportunities for distributed constructionism are identified. We conclude that virtual worlds are effective environments for constructionist learning.
Classification-Based Deep Neural Network Architecture For Collaborative Filtering Recommender Systems This paper proposes a scalable and original classification-based deep neural architecture. Its collaborative filtering approach can be generalized to most of the existing recommender systems, since it just operates on the ratings dataset. The learning process is based on the binary relevant/non-relevant vote and the binary voted/non-voted item information. This data reduction provides a new level of abstraction and makes it possible to design the classification-based architecture. In addition to the original architecture, its prediction process has a novel approach: it does not need to make a large number of predictions to get recommendations. Instead of running the neural network forward for each prediction, our approach runs it forward just once to get a set of probabilities in its categorical output layer. The proposed neural architecture has been tested by using the MovieLens and FilmTrust datasets. A state-of-the-art baseline that outperforms current competitive approaches has been used. Results show competitive recommendation quality and an interesting quality improvement on large numbers of recommendations, consistent with the architecture design. The architecture's originality makes it possible to address a broad range of future works.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
A New Implementation Technique for Applicative Languages
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches: lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structures as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
How do program understanding tools affect how programmers understand programs? In this paper, we explore the question of whether program understanding tools enhance or change the way that programmers understand programs. The strategies that programmers use to comprehend programs vary widely. Program understanding tools should enhance or ease the programmer's preferred strategies, rather than impose a fixed strategy that may not always be suitable. We present observations from a user study that compares three tools for browsing program source code and exploring software structures. In this study, 30 participants used these tools to solve several high-level program understanding tasks. These tasks required a broad range of comprehension strategies. We describe how these tools supported or hindered the diverse comprehension strategies used.
Programming Concepts, Methods and Calculi, Proceedings of the IFIP TC2/WG2.1/WG2.2/WG2.3 Working Conference on Programming Concepts, Methods and Calculi (PROCOMET '94) San Miniato, Italy, 6-10 June, 1994
Reasoning about Action Systems using the B-Method The action system formalism has been successfully used when constructing parallel and distributed systems in a stepwise manner within the refinement calculus. Usually the derivation is carried out manually. In order to be able to produce more trustworthy software, some mechanical tool is needed. In this paper we show how action systems can be derived and refined within the B-Toolkit, which is a mechanical tool supporting a software development method, the B-Method. We describe how action systems are embedded in the B-Method. Furthermore, we show how a typical and nontrivial refinement rule, the superposition refinement rule, is formalized and applied on action systems within the B-Method. In addition to providing tool support for action system refinement we also extend the application area of the B-Method to cover parallel and distributed systems. A derivation towards a distributed load balancing algorithm is given as a case study.
Analyzing User Requirements by Use Cases: A Goal-Driven Approach The purpose of requirements engineering is to elicit and evaluate necessary and valuable user needs. Current use-case approaches to requirements acquisition inadequately support use-case formalization and nonfunctional requirements. Based on industry trends and research, the authors have developed a method to structure use-case models with goals. They use a simple meeting planner system to illustrate the benefits of this new approach
A framework for analyzing and testing requirements with actors in conceptual graphs Software has become an integral part of many people's lives, whether knowingly or not. One key to producing quality software in time and within budget is to efficiently elicit consistent requirements. One way to do this is to use conceptual graphs. Requirements inconsistencies, if caught early enough, can prevent one part of a team from creating unnecessary design, code and tests that would be thrown out when the inconsistency was finally found. Testing requirements for consistency early and automatically is a key to a project being within budget. This paper will share an experience with a mature software project that involved translating software requirements specification into a conceptual graph and recommends several actors that could be created to automate a requirements consistency graph.
Algorithmic and enumerative aspects of the Moser-Tardos distribution Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovász Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that in certain conditions when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show in some cases this MT variant can run faster than the original MT algorithm itself, and develop the first-known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before by proving that the MT-distribution has "large" Rényi entropy and hence that its support-size is large.
Scores (score_0 … score_13): 1.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0
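To illustrate one common use of graded relevance vectors like the one above, here is a small self-contained sketch that computes DCG and NDCG over a score list. The score values are copied from the record above; the function names and the metric choice are ours, not something the dataset prescribes.

import math

def dcg(relevances):
    # DCG = sum over positions i (starting at 1) of rel_i / log2(i + 1)
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending) ordering.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance scores from the first record above (score_0 .. score_13).
scores = [1.2, 0.2, 0.2, 0.2, 0.066667] + [0.0] * 9
print(round(ndcg(scores), 4))  # 1.0, since the list is already in descending order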
Software engineering: an emerging discipline Software engineering is an emerging discipline whose goal is to produce reliable software products in a cost-effective manner. This discipline is evolving rapidly as the challenges faced by its practitioners keep extending their skills. This paper gives a quick tour of the main ideas and thrusts that have driven software engineering in its first 25 years and attempts to look ahead at the next set of advances.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 … score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Parallel implementation of lossless clustered integer KLT using OpenMP The Karhunen-Loève Transform (KLT) has been used as a spectral decorrelator to compress multi-component images, such as hyperspectral images in remote sensing. However, the resultant compression is of lossy or near-lossless quality due to a non-integer output. An approximation of KLT that generates an integer output, called the Integer KLT, has been introduced, which is based on matrix factorization. Due to the complexity of the original KLT algorithm itself, let alone its approximation (Integer KLT), clustering and tiling techniques have been employed. In this study, the OpenMP environment is used for parallelizing purposes by assigning each cluster to a separate processor thread. The results show that the execution time can be sped up, achieving about half the latency of the non-parallelized code.
3D medical image compression based on multiplierless low-complexity RKLT and shape-adaptive wavelet transform A multiplierless low-complexity reversible integer Karhunen-Loève transform (Low-RKLT) is proposed based on matrix factorization. Conventional methods based on KLT suffer from high computational complexity and cannot be applied to lossless medical image compression. To solve these two problems, a multiplierless Low-RKLT is investigated using multi-lifting in this paper. Combined with an ROI coding method, we have proposed a progressive lossy-to-lossless ROI compression method for three-dimensional (3D) medical images with high performance. In our proposed method Low-RKLT is used for the inter-frame decorrelation after SA-DWT in the spatial domain. Simulation results show that the proposed method performs much better in both lossless and lossy compression than the 3D-DWT-based method.
Integer KLT design space exploration for hyperspectral satellite image compression The Integer KLT algorithm is an approximation of the Karhunen-Loève Transform that can be used as a lossless spectral decorrelator. This paper addresses the application of the Integer KLT to lossless compression of hyperspectral satellite imagery. Design space exploration is carried out to investigate the impact of tiling and clustering techniques on the compression ratio and execution time of Integer KLT. AVIRIS hyperspectral images are used as test image data and the spatial compression is carried out with JPEG2000. The results show that clustering can speed up the execution process and can increase the compression performance.
Clustered Reversible-KLT for Progressive Lossy-to-Lossless 3d Image Coding The RKLT is a lossless approximation to the KLT, and has been recently employed for progressive lossy-to-lossless coding of hyperspectral images. Both yield very good coding performance results, but at a high computational price. In this paper we investigate two RKLT clustering approaches to lessen the computational complexity problem: a normal clustering approach, which still yields good performance; and a multi-level clustering approach, which has almost no quality penalty as compared to the original RKLT. Analysis of rate-distortion evolution and of lossless compression ratio is provided. The proposed approaches supply additional benefits, such as spectral scalability, and a decrease of the side information needed to invert the transform. Furthermore, since with a clustering approach SERM factorization coefficients are bounded to a finite range, the proposed methods allow coding of large three-dimensional images within JPEG2000.
On Overview of KRL, a Knowledge Representation Language
Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated. This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables. The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques when 'programming in the small' and do not stand on the firm experience of existing developments methods for large systems. One such technique called block design has been used within the telecommunication industry and relies on a similar paradigm as object-oriented programming. The two techniques together with a third technique, conceptual modeling used for requirement modeling of information systems, have been unified into a method for the development of large systems.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions, allowing for parallel edge detection processing. The implementation is very simple and computationally efficient.
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications.
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 … score_13): 1.2, 0.1, 0.1, 0.018182, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Deriving Architectural Flexibility Requirements In Safety-Critical Systems Safety-critical embedded systems are constrained by safety regulations that require the designers of the system to explain its operation. This includes the operation of any flexibility mechanisms present in the design, and the rationale for their inclusion. The ability to place such flexibility where it is most needed is a crucial factor in reducing the cost and risk of safety-critical system development. In this paper an analysis technique that the designer can apply when faced with potential requirements problems is described and evaluated. The technique derives flexibility requirements from indicators of customer uncertainty in the way the requirement is expressed. This allows the designer to quickly describe the required flexibility in the architecture and proceed with design even when the requirement is expected to change. The evaluation shows a significant improvement in the ability of a design to manage change when it contains flexibility that is targeted using the uncertainty analysis technique, compared with flexibility that is generated through more conventional means.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 … score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Semi-Automated Methodology for Extracting Access Control Rules from the European Data Protection Directive Handling personal data in a legally compliant way is an important factor for ensuring the trustworthiness of a service provider. The EU data protection directive (EU DPD) is built in such a way that the outcomes of rules are subject to explanations, contexts with dependencies, and human interpretation. Therefore, the process of obtaining deterministic and formal rules in policy languages from the EU DPD is difficult to fully automate. To tackle this problem, we demonstrate in this paper the use of a Controlled Natural Language (CNL) to encode the rules of the EU DPD, in a manner that can be automatically converted into the policy languages XACML and PERMIS. We also show that forming machine-executable rules automatically from the controlled natural language grammar not only has the benefit of ensuring the correctness of those rules but also has the potential to make the overall process more efficient.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Part-Based High Accuracy Recognition of Serial Numbers in Bank Notes This paper proposes a novel part-based character recognition method for a new topic of RMB (renminbi bank note, the paper currency used in China) serial number recognition, which is important for reducing financial crime and improving financial market stability and social security. Given an input sample, we first generate a set of local image parts using the Difference-of-Gaussians (DoG) keypoint detector. Then, all of the local parts are classified by an SVM classifier to provide a confidence vector for each part. Finally, three methods are introduced to combine the recognition results of all parts. Since the serial number samples suffer from complex background, occlusion, and degradation, our part-based method takes advantage of both global and local character structure features, and offers an overall increase in robustness and reliability to the entire recognition system. Experiments conducted on a RMB serial number character database show that the test accuracy boosted from 98.90% to 99.33% by utilizing the proposed method with multiple voting based combination strategy. The part-based recognition method can also be extended to other types of banknotes, such as Euro, U.S. and Canadian dollars, or in character recognition applications with complex backgrounds.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Integrity Checking in a Logic-Oriented ER Model
Query Optimization Techniques Utilizing Path Indexes in Object-Oriented Database Systems We propose query optimization techniques that fully utilize the advantages of path indexes in object-oriented database systems. Although path indexes provide efficient access to complex objects, little research has been done on query optimization that fully utilizes path indexes. We first devise a generalized index intersection technique, adapted to the structure of the path index extended from conventional indexes, for utilizing multiple (path) indexes to access each class in a query. We...
The Notion of ``Classes of a Path'' in ER Schemas In Entity-Relationship (ER) modeling, connection traps are a known problem. But the literature does not seem to have provided an adequate treatment of it. Moreover, it seems to be only a special case of a more fundamental problem of whether a piece of information can be represented by a database that is specified by an ER schema. To develop a systematic treatment for this problem, in this paper we suggest adopting a semiotic approach, which enables the separation of topological connections at the syntactic level and semantic connections, and an examination of the inter-relationships between them. Based on this, we propose and describe the notion of 'classes of a path' in an ER schema, and then indicate its implications to ER modeling.
ENIAM: a more complete conceptual schema language
The Object Flow Model: A Formal Framework for Describing the Dynamic Construction, Destruction and Interaction of Complex Objects This research complements active object-oriented database management systems by providing a formal, yet conceptually-natural model for complex object construction and destruction. The Object Flow Model (OFM), introduced in this paper, assumes an object-oriented database for the rich structural description of objects and for the specification of methods to manipulate objects. The OFM contributes a third component, the Object Flow Diagram (OFD), which provides a visual formalism to describe how multiple objects and events can actively invoke processing steps, how objects can become part of progressively more complex objects, and how complex objects can be picked apart. The OFD thus provides an invocation mechanism that is more general than a single message and a processing mechanism that may invoke multiple methods (so long as they apply to either the input or output objects). The development of the OFD was influenced by conceptual modeling languages and discrete event simulation languages and the formal semantics of the OFD is based on work in deductive databases.
A Graph-Based Framework for Multiparadigmatic Visual Access to Databases We describe an approach for multiparadigmatic visual access to databases, which is proposed to achieve seamless integration of different interaction paradigms. The user is provided with an adaptive interface augmented by a user model, supporting different visual representations of both data and queries. The visual representations are characterized on the basis of the chosen visual formalisms, namely forms, diagrams, and icons. To access different databases, a unified data model, the Graph Model, is used as a common underlying formalism to which databases, expressed in the most popular data models, can be mapped. Graph Model databases are queried through the adaptive interface. The semantics of the query operations is formally defined in terms of graphical primitives. Such a formal approach permits us to define the concept of "atomic query," which is the minimal portion of a query that can be transferred from one interaction paradigm to another and processed by the system. Since certain interaction modalities and visual representations are more suitable for certain user classes, the system can suggest to the user the most appropriate interaction modality as well as the visual representation, according to the user model. Some results on user model construction are presented.
Proving Liveness Properties of Concurrent Programs
Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity-Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation.
Developing an Information System Using TROLL: An Application Field Study In this paper we present a national project located in the area of computer aided testing and certifying (CATC) of physical devices.
Domain-Specific Automatic Programming Domain knowledge is crucial to an automatic programming system, and the interaction between domain knowledge and programming is a key issue at the current time. The NIX project at Schlumberger-Doll Research has been investigating this issue in the context of two application domains related to oil well logging. Based on these experiments we have developed a framework for domain-specific automatic programming. Within the framework, programming is modeled in terms of two activities, formalization and implementation, each of which transforms descriptions of the program as it proceeds through intermediate states of development. The activities and transformations may be used to characterize the interaction of programming knowledge and domain knowledge in an automatic programming system.
Extending the Entity-Relationship Approach for Dynamic Modeling Purposes
Logarithmical hopping encoding: a low computational complexity algorithm for image compression LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber-Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient.
Report from the Joint W3C/IETF URI Planning Interest Group: Uniform Resource Identifiers (URIs), URLs, and Uniform Resource Names (URNs): Clarifications and Recommendations
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.200091
0.200091
0.200091
0.200091
0.100069
0.050055
0.005889
0.000078
0.00004
0.000009
0.000001
0
0
0
ScenIC: A Strategy for Inquiry-Driven Requirements Determination ScenIC is a requirements engineering method for evolving systems. Derived from the Inquiry Cycle model of requirements refinement, it uses goal refinement and scenario analysis as its primary methodological strategies. ScenIC rests on an analogy with human memory: semantic memory consists of generalizations about system properties; episodic memory consists of specific episodes and scenarios; and working memory consists of reminders about incomplete refinements. Method-specific reminders and resolution guidelines are activated by the state of episodic or semantic memory. The paper presents a summary of the ScenIC strategy and guidelines.
The Three Dimensions of Requirements Engineering Requirements engineering (RE) is perceived as an area of growing importance. Due to the increasing effort spent on research in this area, many contributions to solve different problems within RE exist. The purpose of this paper is to identify the main goals to be reached during the requirements engineering process in order to develop a framework for RE. This framework consists of the three dimensions:
Merging individual conceptual models of requirements While it is acknowledged that system requirements will never be complete, incompleteness is often due to an inadequate process and methods for acquiring and tracking a representative set of requirements. Viewpoint development has been proposed to address these problems. We offer a viewpoint development approach that fits easily into the current practice of capturing requirements as use case descriptions. However, current practice does not support visualization of use case descriptions, the capture of multiple use case descriptions, the modeling of conflicts and the reconciliation of viewpoints. In our approach we apply techniques from natural language processing, term subsumption and set-theory to automatically convert the use case descriptions into a line diagram. The visualisation of use case descriptions is a natural addition to the object-oriented design of systems using the Unified Modelling Language where diagrams act as communication and validation devices. RECOCASE is a comprehensive methodology that includes use case description guidelines, a controlled language to support natural language translation, a requirements engineering process model and a tool to assist the specification and reconciliation of requirements. Our approach combines group and individual processes to minimise contradictions and missing information and maximise ownership of the requirements models. In this paper we describe each of the parts of our methodology following an example through each section.
Handling Obstacles in Goal-Oriented Requirements Engineering Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system.
Inquiry-Based Requirements Analysis This approach emphasizes pinpointing where and when information needs occur; at its core is the inquiry cycle model, a structure for describing and supporting discussions about system requirements. The authors use a case study to describe the model's conversation metaphor, which follows analysis activities from requirements elicitation and documentation through refinement.
Agent-based support for communication between developers and users in software design Research in knowledge-based software engineering has led to advances in the ability to specify and automatically generate software. Advances in the support of upstream activities have focussed on assisting software developers. We examine the possibility of extending computer-based support in the software development process to allow end users to participate, providing feedback directly to developers. The approach uses the notion of "agents" developed in artificial intelligence research and concepts of participatory design. Namely, agents monitor end users working with prototype systems and report mismatches between developers' expectations and a system's actual usage. At the same time, the agents provide end users with an opportunity to communicate with developers, either synchronously or asynchronously. The use of agents is based on actual software development experiences.
An Interval Logic for Real-Time System Specification Formal techniques for the specification of real-time systems must be capable of describing system behavior as a set of relationships expressing the temporal constraints among events and actions, including properties of invariance, precedence, periodicity, liveness, and safety conditions. This paper describes a Temporal-Interval Logic with Compositional Operators (TILCO) designed expressly for the specification of real-time systems. TILCO is a generalization of classical temporal logics based on the operators eventually and henceforth; it allows both qualitative and quantitative specification of time relationships. TILCO is based on time intervals and can concisely express temporal constraints with time bounds, such as those needed to specify real-time systems. This approach can be used to verify the completeness and consistency of specifications, as well as to validate system behavior against its requirements and general properties. TILCO has been formalized by using the theorem prover Isabelle/HOL. TILCO specifications satisfying certain properties are executable by using a modified version of the Tableaux algorithm. This paper defines TILCO and its axiomatization, highlights the tools available for proving properties of specifications and for their execution, and provides an example of system specification and validation.
Tolerant planning and negotiation in generating coordinated movement plans in an automated factory Plan robustness is important for real world applications where modelling imperfections often result in execution deviations. The concept of tolerant planning is suggested as one of the ways to build robust plans. Tolerant planning achieves this aim by being tolerant of an agent's own execution deviations. When applied to multi-agent domains, it has the additional characteristic of being tolerant of other agents' deviant behaviour. Tolerant planning thus defers dynamic replanning until execution errors become excessive. The underlying strategy is to provide more than ample resources for agents to achieve their goals. Such redundancies aggravate the resource contention problem. To counter this, the iterative negotiation mechanism is suggested. It requires agents to be skillful in negotiating with other agents to resolve conflicts in such a way as to minimize compromising one's own tolerances and yet being benevolent in helping others find a feasible plan.
Theories underlying requirements engineering: an overview of NATURE at Genesis NATURE is a collaborative basic research project on theories underlying requirements engineering funded by the ESPRIT III program of the European communities. Its goals are to develop a theory of knowledge representation that embraces subject, usage and development worlds surrounding the system, including expressive freedoms; a theory of domain engineering that facilitates the identification, acquisition and formalization of domain knowledge as well as similarity-based matching and classifying of software engineering knowledge; and a process engineering theory that promotes context and decision-based control of the development process. These theories are integrated and evaluated in a prototype environment constructed around an extended version of the conceptual modeling language Telos
Implementing specification freedoms The process of converting formal specifications into valid implementations is central in the development of reliable software. As formal specification languages are enriched with constructs to enhance their expressive capabilities and as they increasingly afford specificational freedoms by requiring only a description of intended behavior rather than a prescription of particular algorithms, the gap between specification and implementation widens so that converting specifications into implementations becomes even more difficult. A major problem lies in the mapping of high-level specification constructs into an implementation that effects the desired behavior. In this paper, we consider the issues involved in eliminating occurrences of high-level specification-oriented constructs during this process. Mapping issues are discussed in the context of our development methodology, in which implementations are derived via the application of correctness-preserving transformations applied to a specification language whose high-level expressive capabilities are modeled after natural language. After the general discussion, we demonstrate the techniques on a real system whose specification is written in this language.
The CG Formalism as an Ontolingua for Web-Oriented Representation Languages The semantic Web entails the standardization of representation mechanisms so that the knowledge contained in a Web document can be retrieved and processed on a semantic level. RDF seems to be the emerging encoding scheme for that purpose. However, there are many different sorts of documents on the Web that do not use RDF as their primary coding scheme. It is expected that many one-to-one mappings between pairs of document representation formalisms will eventually arise. This would create a situation where a young standard such as RDF would generate update problems for all these mappings as it evolves, which is inevitable. Rather, we advocate the use of a common Ontolingua for all these encoding formalisms. Though there may be many knowledge representation formalisms suited for that task, we advocate the use of the conceptual graph formalism.
Learning word vectors for sentiment analysis Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area.
Feature based classification of computer graphics and real images Photorealistic images can now be created using advanced techniques in computer graphics (CG). Synthesized elements could easily be mistaken for photographic (real) images. Therefore we need to differentiate between CG and real images. In our work, we propose and develop a new framework based on an aggregate of existing features. Our framework has a classification accuracy of 90% when tested on the de facto standard Columbia dataset, which is 4% better than the best results obtained by other prominent methods in this area. We further show that using feature selection it is possible to reduce the feature dimension of our framework from 557 to 80 without a significant loss in performance (≪ 1%). We also investigate different approaches that attackers can use to fool the classification system, including creation of hybrid images and histogram manipulations. We then propose and develop filters to effectively detect such attacks, thereby limiting the effect of such attacks to our classification system.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.088458
0.037867
0.020464
0.012022
0.00731
0.003252
0.000819
0.000488
0.000218
0.000056
0
0
0
0
Integrating and customizing heterogeneous e-commerce applications A broad spectrum of electronic commerce applications is currently available on the Web, providing services in almost any area one can think of. As the number and variety of such applications grow, more business opportunities emerge for providing new services based on the integration and customization of existing applications. (Web shopping malls and support for comparative shopping are just a couple of examples.) Unfortunately, the diversity of applications in each specific domain and the disparity of interfaces, application flows, actor roles in the business transaction, and data formats, renders the integration and manipulation of applications a rather difficult task. In this paper we present the Application Manifold system, aimed at simplifying the intricate task of integration and customization of e-commerce applications. The scope of the work in this paper is limited to web-enabled e-commerce applications. We do not support the integration/customization of proprietary/legacy applications. The wrapping of such applications as web services is complementary to our work. Based on the emerging Web data standard, XML, and application modeling standard, UML, the system offers a novel declarative specification language for describing the integration/customization task, supporting a modular approach where new applications can be added and integrated at will with minimal effort. Then, acting as an application generator, the system generates a full integrated/customized e-commerce application, with the declarativity of the specification allowing for the optimization and verification of the generated application. The integration here deals with the full profile of the given e-commerce applications: the various services offered by the applications, the activities and roles of the different actors participating in the application (e.g., customers, vendors), the application flow, as well as with the data involved in the process. This is in contrast to previous works on Web data integration that focused primarily on querying the data available in the applications, mostly ignoring the additional aspects mentioned above.
Statecharts: A visual formalism for complex systems We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
On visual formalisms The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Towards an Automatic Integration of Statecharts The integration of statecharts is part of an integration methodology for object oriented views. Statecharts are the most important language for the representation of the behaviour of objects and are used in many object oriented modeling techniques, e.g. in UML ([23]). In this paper we focus on the situation where the behaviour of an object type is represented in several statecharts, which have to be integrated into a single statechart. The presented approach allows an automatic integration process but gives the designer possibilities to make own decisions to guide the integration process and to achieve qualitative design goals.
A Graphical Query Language Based on an Extended E-R Model
Levelled Entity Relationship Model The Entity-Relationship formalism, introduced in the mid-seventies, is an extensively used tool for database design. The database community is now involved in building the next generation of database systems. However, there is no effective formalism similar to ER for modeling the complex data in these systems. We propose the Levelled Entity Relationship (LER) formalism as a step towards fulfilling such a need. An essential characteristic of these next-generation systems is that a data element is ...
Deductive database support for data visualization We argue that we can use deductive databases to support data visualization. In particular we show how we have used the deductive languages LDL and CORAL for the implementation of the visual query language GraphLog. We discuss in detail the translation function from GraphLog to each of LDL and CORAL, especially when aggregate functions are present. We also present an example of using GraphLog and its environment Hy+ in order to support software design understanding and software design verification.
An Operational Approach to Requirements Specification for Embedded Systems The approach to requirements specification for embedded systems described in this paper is called "operational" because a requirements specification is an executable model of the proposed system interacting with its environment. The approach is embodied by the language PAISLey, which is motivated and defined herein. Embedded systems are characterized by asynchronous parallelism, even at the requirements level; PAISLey specifications are constructed by interacting processes so that this can be represented directly. Embedded systems are also characterized by urgent performance requirements, and PAISLey offers a formal, but intuitive, treatment of performance.
Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" finite-dimensional systems using some particular algebras. This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids.
An Effective Implementation for the Generalized Input-Output Construct of CSP
Using Abstraction and Model Checking to Detect Safety Violations in Requirements Specifications Exposing inconsistencies can uncover many defects in software specifications. One approach to exposing inconsistencies analyzes two redundant specifications, one operational and the other property-based, and reports discrepancies. This paper describes a "practical" formal method, based on this approach and the SCR (Software Cost Reduction) tabular notation, that can expose inconsistencies in software requirements specifications. Because users of the method do not need advanced mathematical training or theorem proving skills, most software developers should be able to apply the method without extraordinary effort. This paper also describes an application of the method which exposed a safety violation in the contractor-produced software requirements specification of a sizable, safety-critical control system. Because the enormous state space of specifications of practical software usually renders direct analysis impractical, a common approach is to apply abstraction to the specification. To reduce the state space of the control system specification, two "pushbutton" abstraction methods were applied, one which automatically removes irrelevant variables and a second which replaces the large, possibly infinite, type sets of certain variables with smaller type sets. Analyzing the reduced specification with the model checker Spin uncovered a possible safety violation. Simulation demonstrated that the safety violation was not spurious but an actual defect in the original specification.
Architecture and applications of the Hy+ visualization system The Hy+ system is a generic visualization tool that supports a novel visual query language called GraphLog. In Hy+, visualizations are based on a graphical formalism that allows comprehensible representations of databases, queries, and query answers to be interactively manipulated. This paper describes the design, architecture, and features of Hy+ with a number of applications in software engineering and network management.
Notes on Nonrepetitive Graph Colouring. A vertex colouring of a graph is nonrepetitive on paths if there is no path v_1, v_2, ..., v_2t such that v_i and v_{t+i} receive the same colour for all i = 1, 2, ..., t. We determine the maximum density of a graph that admits a k-colouring that is nonrepetitive on paths. We prove that every graph has a subdivision that admits a 4-colouring that is nonrepetitive on paths. The best previous bound was 5. We also study colourings that are nonrepetitive on walks, and provide a conjecture that would imply that every graph with maximum degree Δ has an f(Δ)-colouring that is nonrepetitive on walks. We prove that every graph with treewidth k and maximum degree Δ has an O(kΔ)-colouring that is nonrepetitive on paths, and an O(kΔ³)-colouring that is nonrepetitive on walks.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2496
0.002022
0.000405
0.000097
0.000081
0.00004
0.000012
0
0
0
0
0
0
0
Synchrony Loosening Transformations for Interacting Processes Stepwise refinement has been applied to several subjects. For example: Data Refinement: replacing abstract data by a more efficient concrete representation, e.g. [23], [5]; Action Refinement: replacing a complex action by a combination of simpler actions (a common refinement); Atomicity Refinement: [6]. While the theory of stepwise refinement (by correctness preserving transformations) has reached a rather elaborate state by now (e.g., [4], which contains further references), very little, if anything, has been said about the refinement of synchronous, multiparty actions. Such actions occur only in the context of distributed programs and have been found to be a very useful design tool, as explained below. Synchrony loosening transformations constitute a major design tool used in Interacting Processes (IP) [21]. The core of IP together with its operational semantics have been defined in [3], [20]. The latter presents an
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series We connect measures of public opinion measured from polls with sentiment measured from text. We analyze several surveys on consumer confidence and political opinion over the 2008 to 2009 period, and find they correlate to sentiment word frequencies in contemporaneous Twitter messages. While our results vary across datasets, in several cases the correlations are as high as 80%, and capture important large-scale trends. The results highlight the potential of text streams as a substitute and supplement for traditional polling.
The Party Is Over Here: Structure and Content in the 2010 Election.
Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets' political sentiment demonstrates close correspondence to the parties' and politicians' political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research.
Opinion mining in social media: Modeling, simulating, and forecasting political opinions in the web Affordable and ubiquitous online communications (social media) provide the means for flows of ideas and opinions and play an increasing role for the transformation and cohesion of society – yet little is understood about how online opinions emerge, diffuse, and gain momentum. To address this problem, an opinion formation framework based on content analysis of social media and sociophysical system modeling is proposed. Based on prior research and own projects, three building blocks of online opinion tracking and simulation are described: (1) automated topic, emotion and opinion detection in real-time, (2) information flow modeling and agent-based simulation, and (3) modeling of opinion networks, including special social and psychological circumstances, such as the influence of emotions, media and leaders, changing social networks etc. Finally, three application scenarios are presented to illustrate the framework and motivate further research.
TUGAS: Exploiting Unlabelled Data for Twitter Sentiment Analysis
Building a sentiment lexicon for social judgement mining We present a methodology for automatically enlarging a Portuguese sentiment lexicon for mining social judgments from text, i.e., detecting opinions on human entities. Starting from publicly available language resources, the identification of human adjectives is performed through the combination of a linguistic-based strategy, for extracting human adjective candidates from corpora, and machine learning for filtering the human adjectives from the candidate list. We then create a graph of the synonymic relations among the human adjectives, which is built from multiple open thesauri. The graph provides distance features for training a model for polarity assignment. Our initial evaluation shows that this method produces results at least as good as the best that have been reported for this task.
POPSTAR at RepLab 2013: Name Ambiguity Resolution on Twitter.
Chunking with support vector machines We apply Support Vector Machines (SVMs) to identify English base phrases (chunks). SVMs are known to achieve high generalization performance even with input data of high dimensional feature spaces. Furthermore, by the Kernel principle, SVMs can carry out training with smaller computational overhead independent of their dimensionality. We apply weighted voting of 8 SVMs-based systems trained with distinct chunk representations. Experimental results show that our approach achieves higher accuracy than previous approaches.
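A minimal sketch of the idea in the preceding entry (not the paper's actual system): token-level chunk tagging with a linear SVM over sparse categorical features. It assumes scikit-learn is installed; the toy sentences, feature names, and BIO tags are invented for illustration only.

```python
# Toy illustration of SVM-based chunking: classify each token into a BIO chunk tag
# using simple categorical features. Assumes scikit-learn; the data is invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# (word, POS, chunk tag) triples for two tiny "sentences"
train = [
    [("He", "PRP", "B-NP"), ("reckons", "VBZ", "B-VP"), ("the", "DT", "B-NP"),
     ("deficit", "NN", "I-NP"), ("narrows", "VBZ", "B-VP")],
    [("Confidence", "NN", "B-NP"), ("in", "IN", "B-PP"), ("the", "DT", "B-NP"),
     ("pound", "NN", "I-NP")],
]

def features(sent, i):
    """Features for token i: its word/POS plus the neighbouring POS tags."""
    word, pos, _ = sent[i]
    return {
        "word": word.lower(),
        "pos": pos,
        "prev_pos": sent[i - 1][1] if i > 0 else "BOS",
        "next_pos": sent[i + 1][1] if i + 1 < len(sent) else "EOS",
    }

X_dicts, y = [], []
for sent in train:
    for i, (_, _, tag) in enumerate(sent):
        X_dicts.append(features(sent, i))
        y.append(tag)

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)          # sparse, high-dimensional feature vectors
clf = LinearSVC(C=1.0).fit(X, y)        # one-vs-rest linear SVMs over the tags

test_sent = [("The", "DT", "?"), ("pound", "NN", "?"), ("narrows", "VBZ", "?")]
X_test = vec.transform([features(test_sent, i) for i in range(len(test_sent))])
print(list(clf.predict(X_test)))        # e.g. ['B-NP', 'I-NP', 'B-VP']
```

The paper combines several SVMs trained on distinct chunk representations by weighted voting; this sketch shows only a single classifier over one representation.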
Generative communication in Linda Generative communication is the basis of a new distributed programming language that is intended for systems programming in distributed settings generally and on integrated network computers in particular. It differs from previous interprocess communication models in specifying that messages be added in tuple-structured form to the computation environment, where they exist as named, independent entities until some process chooses to receive them. Generative communication results in a number of distinguishing properties in the new language, Linda, that is built around it. Linda is fully distributed in space and distributed in time; it allows distributed sharing, continuation passing, and structured naming. We discuss these properties and their implications, then give a series of examples. Linda presents novel implementation problems that we discuss in Part II. We are particularly concerned with implementation of the dynamic global name space that the generative communication model requires.
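A minimal in-process sketch of the generative-communication idea described above (not an actual Linda implementation): a shared tuple space with out/rd/in operations, where None acts as a wildcard in templates. The class and method names, and the use of Python threads in place of distributed processes, are illustrative assumptions.

```python
# Minimal tuple-space sketch: tuples are added with out(), matched with rd()/in_()
# against templates in which None is a wildcard. Blocking uses a Condition variable.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Add a tuple to the space (generative communication)."""
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Block until a matching tuple exists; return it, leaving it in place."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def in_(self, template):
        """Block until a matching tuple exists; remove and return it."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

if __name__ == "__main__":
    ts = TupleSpace()
    ts.out(("sum", 2, 3))
    worker = threading.Thread(
        target=lambda: ts.out(("result", sum(ts.in_(("sum", None, None))[1:]))))
    worker.start()
    print(ts.rd(("result", None)))   # ('result', 5)
    worker.join()
```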
The ESTEREL Synchronous Programming Language and its Mathematical Semantics Without Abstract
A framework for expressing the relationships between multiple views in requirements specification Composite systems are generally comprised of heterogeneous components whose specifications are developed by many development participants. The requirements of such systems are invariably elicited from multiple perspectives that overlap, complement, and contradict each other. Furthermore, these requirements are generally developed and specified using multiple methods and notations, respectively. It is therefore necessary to express and check the relationships between the resultant specification fragments. We deploy multiple ViewPoints that hold partial requirements specifications, described and developed using different representation schemes and development strategies. We discuss the notion of inter-ViewPoint communication in the context of this ViewPoints framework, and propose a general model for ViewPoint interaction and integration. We elaborate on some of the requirements for expressing and enacting inter-ViewPoint relationships-the vehicles for consistency checking and inconsistency management. Finally, though we use simple fragments of the requirements specification method CORE to illustrate various components of our work, we also outline a number of larger case studies that we have used to validate our framework. Our computer-based ViewPoints support environment, The Viewer, is also briefly described.
Compact and localized distributed data structures This survey concerns the role of data structures for compactly storing and representing various types of information in a localized and distributed fashion. Traditional approaches to data representation are based on global data structures, which require access to the entire structure even if the sought information involves only a small and local set of entities. In contrast, localized data representation schemes are based on breaking the information into small local pieces, or labels, selected in a way that allows one to infer information regarding a small set of entities directly from their labels, without using any additional (global) information. The survey concentrates mainly on combinatorial and algorithmic techniques, such as adjacency and distance labeling schemes and interval schemes for routing, and covers complexity results on various applications, focusing on compact localized schemes for message routing in communication networks.
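A toy sketch of one classical localized scheme in the spirit of the survey above: labelling the vertices of a rooted tree with DFS entry/exit intervals so that ancestry can be decided from the two labels alone, with no access to any global structure. The tree, dictionary encoding, and function names are illustrative.

```python
# Interval (DFS) labelling of a rooted tree: each vertex gets (enter, exit) numbers,
# and u is an ancestor of v iff u's interval contains v's. Ancestry queries then
# need only the two local labels, not the whole tree.
def interval_labels(tree, root):
    labels, clock = {}, 0
    enter = {root: 0}
    stack = [(root, iter(tree.get(root, ())))]
    while stack:
        node, children = stack[-1]
        child = next(children, None)
        if child is None:
            stack.pop()
            clock += 1
            labels[node] = (enter[node], clock)   # close the interval on exit
        else:
            clock += 1
            enter[child] = clock
            stack.append((child, iter(tree.get(child, ()))))
    return labels

def is_ancestor(label_u, label_v):
    (eu, xu), (ev, xv) = label_u, label_v
    return eu <= ev and xv <= xu

tree = {"r": ["a", "b"], "a": ["c", "d"], "b": []}
L = interval_labels(tree, "r")
print(is_ancestor(L["a"], L["d"]))   # True
print(is_ancestor(L["b"], L["c"]))   # False
```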
A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges Model-Based Testing (MBT) represents a feasible and interesting testing strategy where test cases are generated from formal models describing the software behavior/structure. The MBT field is continuously evolving, as can be observed in the increasing number of MBT techniques published in the technical literature. However, there is still a gap between research on MBT and its application in the software industry, mainly caused by the lack of information regarding the concepts, available techniques, and challenges in using this testing strategy in real software projects. This chapter presents information intended to support researchers and practitioners in reducing this gap, consequently contributing to the transfer of this technology from academia to industry. It includes information regarding the concepts of MBT, a characterization of 219 available MBT techniques, approaches supporting the selection of MBT techniques for software projects, risk factors that may influence the use of these techniques in the industry together with some mechanisms to mitigate their impact, and future perspectives regarding the MBT field.
Reversible Denoising and Lifting Based Color Component Transformation for Lossless Image Compression An undesirable side effect of reversible color space transformation, which consists of lifting steps (LSs), is that while removing correlation it contaminates transformed components with noise from other components. Noise affects particularly adversely the compression ratios of lossless compression algorithms. To remove correlation without increasing noise, a reversible denoising and lifting step (RDLS) was proposed that integrates denoising filters into LS. Applying RDLS to color space transformation results in a new image component transformation that is perfectly reversible despite involving the inherently irreversible denoising; the first application of such a transformation is presented in this paper. For the JPEG-LS, JPEG 2000, and JPEG XR standard algorithms in lossless mode, the application of RDLS to the RDgDb color space transformation with simple denoising filters is especially effective for images in the native optical resolution of acquisition devices. It results in improving compression ratios of all those images in cases when unmodified color space transformation either improves or worsens ratios compared with the untransformed image. The average improvement is 5.0–6.0% for two out of the three sets of such images, whereas average ratios of images from standard test-sets are improved by up to 2.2%. For the efficient image-adaptive determination of filters for RDLS, a couple of fast entropy-based estimators of compression effects that may be used independently of the actual compression algorithm are investigated and an immediate filter selection method based on the detector precision characteristic model driven by image acquisition parameters is introduced.
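A hedged sketch of the reversibility argument behind integrating denoising into a lifting step, as described in the preceding entry: the prediction is computed from a denoised copy of a reference component that is itself stored losslessly, so the decoder can recompute exactly the same prediction and undo the step. The 3x3 mean filter and component names are illustrative assumptions, not the paper's RDgDb pipeline; numpy is assumed.

```python
# Sketch of a reversible denoising-and-lifting step (RDLS-style idea):
# component U is predicted from a *denoised* view of component V, but V itself
# passes through unmodified, so the decoder can redo the identical denoising
# and invert the step exactly despite denoising being irreversible on its own.
import numpy as np

def denoise(v):
    """Simple 3x3 mean filter (illustrative stand-in for the paper's filters)."""
    p = np.pad(v.astype(np.int64), 1, mode="edge")
    acc = sum(p[dy:dy + v.shape[0], dx:dx + v.shape[1]]
              for dy in range(3) for dx in range(3))
    return acc // 9

def forward_step(u, v):
    """Lifting step: subtract a prediction derived from denoise(v) from u."""
    return u.astype(np.int64) - denoise(v), v     # v is passed through unchanged

def inverse_step(u_res, v):
    """Decoder side: v is available bit-exactly, so the prediction is identical."""
    return u_res + denoise(v), v

rng = np.random.default_rng(0)
U = rng.integers(0, 256, (8, 8))
V = rng.integers(0, 256, (8, 8))
U_res, V_pass = forward_step(U, V)
U_back, _ = inverse_step(U_res, V_pass)
print(bool(np.array_equal(U, U_back)))   # True: the step is perfectly reversible
```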
1.025761
0.026911
0.024336
0.019375
0.019375
0.009688
0.002676
0.000276
0
0
0
0
0
0
Real-time constraints in a rapid prototyping language This paper presents real-time constraints of a prototyping language and some mechanisms for handling these constraints in rapidly prototyping embedded systems. Rapid prototyping of embedded systems can be accomplished using a Computer Aided Prototyping System (CAPS) and its associated Prototyping Language (PSDL) to aid the designer in handling hard real-time constraints. The language models time critical operations with maximum execution times, maximum response times and minimum periods. The mechanisms for expressing timing constraints in PSDL are described along with their meanings relative to a series of hardware models which include multi-processor configurations. We also describe a language construct for specifying the policies governing real-time behavior under overload conditions.
Tools for specifying real-time systems Tools for formally specifying software for real-time systems have strongly improved their capabilities in recent years. At present, tools have the potential for improving software quality as well as engineers' productivity. Many tools have grown out of languages and methodologies proposed in the early 1970s. In this paper, the evolution and the state of the art of tools for real-time software specification is reported, by analyzing their development over the last 20 years. Specification techniques are classified as operational, descriptive or dual if they have both operational and descriptive capabilities. For each technique reviewed three different aspects are analyzed, that is, power of formalism, tool completeness, and low-level characteristics. The analysis is carried out in a comparative manner; a synthetic comparison is presented in the final discussion where the trend of technology improvement is also analyzed.
Timing requirements for time-driven systems using augmented Petri Nets A methodology for the statement of timing requirements is presented for a class of embedded computer systems. The notion of a "time-driven" system is introduced which is formalized using a Petri net model augmented with timing information. Several subclasses of time-driven systems are defined with increasing levels of complexity. By deriving the conditions under which the Petri net model can be proven to be safe in the presence of time, timing requirements for modules in the system can be obtained. Analytical techniques are developed for proving safeness in the presence of time for the net constructions used in the defined subclasses of time-driven systems.
Issues in the Development of Large, Distributed, and Reliable Software
Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automaton which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
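A toy sketch of the two core checks described in that entry, disjointness (no nondeterminism) and coverage (no missing cases), phrased as brute-force checks of boolean conditions over a small enumerated input space. The monitored quantities, predicates, and outputs are invented, and real SCR checkers work symbolically on tables rather than by enumeration.

```python
# Brute-force consistency check of a small condition table: the rows must be
# pairwise disjoint (otherwise the spec is nondeterministic) and must jointly
# cover every input combination (otherwise cases are missing). Illustrative only.
from itertools import product

# Monitored quantities and their possible values (invented example)
domains = {"pressure": ["low", "ok", "high"], "alarm_enabled": [True, False]}

# Each row: condition over the monitored state -> value of a controlled quantity
rows = [
    (lambda s: s["pressure"] == "high" and s["alarm_enabled"], "sound_alarm"),
    (lambda s: s["pressure"] == "high" and not s["alarm_enabled"], "log_only"),
    (lambda s: s["pressure"] != "high", "idle"),
]

names = list(domains)
states = [dict(zip(names, vals)) for vals in product(*(domains[n] for n in names))]

for state in states:
    hits = [out for cond, out in rows if cond(state)]
    if len(hits) == 0:
        print("missing case:", state)
    elif len(hits) > 1:
        print("nondeterminism:", state, "->", hits)
print("check complete")
```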
Specware: Formal Support for Composing Software
Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
An exploratory contingency model of user participation and MIS use A model is proposed of the relationship between user participation and degree of MIS usage. The model has four dimensions: participation characteristics, system characteristics, system initiator, and the system development environment. Stages of the System Development Life Cycle are considered as participation characteristics, task complexity as a system characteristic, and top management support and user attitudes as parts of the system development environment. The data are from a cross-sectional survey in Korea, covering 134 users of 77 different information systems in 32 business firms. The results of the analysis support the proposed model in general. Several implications of this for MIS managers are then discussed.
Cooperation without communication Intelligent agents must be able to interact even without the benefit of communication. In this paper we examine various constraints on the actions of agents in such situations and discuss the effects of these constraints on their derived utility. In particular, we define and analyze basic rationality; we consider various assumptions about independence; and we demonstrate the advantages of extending the definition of rationality from individual actions to decision procedures.
Software requirements: Are they really a problem? Do requirements arise naturally from an obvious need, or do they come about only through diligent effort—and even then contain problems? Data on two very different types of software requirements were analyzed to determine what kinds of problems occur and whether these problems are important. The results are dramatic: software requirements are important, and their problems are surprisingly similar across projects. New software engineering techniques are clearly needed to improve both the development and statement of requirements.
Formalising Java's data race free guarantee We formalise the data race free (DRF) guarantee provided by Java, as captured by the semi-formal Java Memory Model (JMM) [1] and published in the Java Language Specification [2]. The DRF guarantee says that all programs which are correctly synchronised (i.e., free of data races) can only have sequentially consistent behaviours. Such programs can be understood intuitively by programmers. Formalisation has achieved three aims. First, we made definitions and proofs precise, leading to a better understanding; our analysis found several hidden inconsistencies and missing details. Second, the formalisation lets us explore variations and investigate their impact in the proof with the aim of simplifying the model; we found that not all of the anticipated conditions in the JMM definition were actually necessary for the DRF guarantee. This allows us to suggest a quick fix to a recently discovered serious bug [3] without invalidating the DRF guarantee. Finally, the formal definition provides a basis to test concrete examples, and opens the way for future work on JMM-aware logics for concurrent programs.
Intuitionistic Refinement Calculus Refinement calculi are program logics which formalize the “top-down” methodology of software development promoted by Dijkstra and Wirth in the early days of structured programming. I present here the shallow embedding of a refinement calculus into constructive type theory. This embedding involves monad transformers and the computational reflexion of weakest-preconditions, using a continuation passing style. It should allow to reason about many programs combining non-functional features (state, exceptions, etc) with purely functional ones (higher-order functions, structural recursion, etc).
A rigorous method for the constructive design of parallel and distributed programs Parallel and distributed systems engineers are always looking for a way to speed-up their programs. They sometimes forget that well-structured programs are more flexible, and therefore easier to modify or restructure in order to improve performance or to map onto a particular architecture. This paper illustrates a systematic way of designing well-structured parallel and distributed programs. The method is based on SASD, one of the most popular methods for the analysis and design of sequential systems, and CSP, a formalism for specifying the behaviour of communicating systems. The influence of SASD is evident in the way diagrams are used during the various phases of the development. CSP allows us to formally verify and transform the programs. The main feature of our method is the ability to reuse behavioural specifications, the way the components synchronise and communicate, and provide rules to verify and transform the design structure.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.112042
0.018689
0.014445
0.005001
0.003761
0.002
0.001146
0.000706
0.000383
0.000086
0.000001
0
0
0
The role of knowledge in software development Software development is knowledge-intensive. Many concepts have been developed to ease or guide the processing of knowledge in software development, including information hiding, modularity, objects, functions and procedures, patterns, and more. These concepts are supported by various methods, approaches, and tools using symbols, graphics, and languages. Some are formal; others are semiformal or simply made up of key practices. Methods and approaches in software engineering are often based on the results of empirical observations or on individual success stories.
Organizing usability work to fit the full product range
Integrating multiple paradigms within the blackboard framework While early knowledge-based systems suffered the frequent criticism of having little relevance to the real world, an increasing number of current applications deal with complex, real-world problems. Due to the complexity of real-world situations, no one general software technique can produce adequate results in different problem domains, and artificial intelligence usually needs to be integrated with conventional paradigms for efficient solutions. The complexity and diversity of real-world applications have also forced the researchers in the AI field to focus more on the integration of diverse knowledge representation and reasoning techniques for solving challenging, real-world problems. Our development environment, BEST (Blackboard-based Expert Systems Toolkit), is aimed to provide the ability to produce large-scale, evolvable, heterogeneous intelligent systems. BEST incorporates the best of multiple programming paradigms in order to avoid restricting users to a single way of expressing either knowledge or data. It combines rule-based programming, object-oriented programming, logic programming, procedural programming and blackboard modelling in a single architecture for knowledge engineering, so that the user can tailor a style of programming to his application, using any or arbitrary combinations of methods to provide a complete solution. The deep integration of all these techniques yields a toolkit more effective even for a specific single application than any technique in isolation or collections of multiple techniques less fully integrated. Within the basic, knowledge-based programming paradigm, BEST offers a multiparadigm language for representing complex knowledge, including incomplete and uncertain knowledge. Its problem solving facilities include truth maintenance, inheritance over arbitrary relations, temporal and hypothetical reasoning, opportunistic control, automatic partitioning and scheduling, and both blackboard and distributed problem-solving paradigms.
Animation of Object-Z Specifications with a Set-Oriented Prototyping Language
Generating Conditional Plans and Programs
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Object-oriented and conventional analysis and design methodologies Three object-oriented analysis methodologies and three object-oriented design methodologies are reviewed and compared to one another. The authors' intent is to answer the question of whether emerging object-oriented analysis and design methodologies require incremental or radical changes on the part of prospective adopters. The evolution of conventional development methodologies is discussed, and three areas-system partitioning, end-to-end process modeling, and harvesting reuse-that appear to be strong candidates for further development work are presented.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
From information system requirements to designs: a mapping framework Comprehensive methodologies for information system development need to provide a framework for the adequate representation of system requirements and also for their usage in generating system designs. Requirements specifications are assumed to include a functional description of what the information system is intended to do, how it will interact with its environment, what information it will manage and how that information relates to the system's environment. The generation of a design is achieved by mapping elements of the requirements model into one or more corresponding design objects. This mapping process is guided by two considerations. Locally, the process is directed by dependency types among requirements and design objects which determine allowable mappings for a particular requirements object. Globally, the process is guided by non-functional requirements, such as accuracy and security requirements on the intended system, which are represented as goals describing desirable properties of the intended system. Satisficing methods for these goals are used to guide local mapping decisions. The paper includes the description of a prototype implementation—called IRIS—of aspects of the proposed mapping framework and illustrates its features through a sample session. The implementation was carried out within the DAIDA project at the Institute of Computer Science of the Foundation for Research and Technology, Crete.
Requirements monitoring in dynamic environments We propose requirements monitoring to aid in the maintenance of systems that reside in dynamic environments. By requirements monitoring we mean the insertion of code into a running system to gather information from which it can be determined whether, and to what degree, that running system is meeting its requirements. Monitoring is a commonly applied technique in support of performance tuning, but the focus therein is primarily on computational performance requirements in short runs of systems. We wish to address systems that operate in a long lived, ongoing fashion in nonscientific enterprise applications. We argue that the results of requirements monitoring can be of benefit to the designers, maintainers and users of a system, alerting them when the system is being used in an environment for which it was not designed, and giving them the information they need to direct their redesign of the system. Studies of two commercial systems are used to illustrate and justify our claims.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
Rodin: an open toolset for modelling and reasoning in Event-B Event-B is a formal method for system-level modelling and analysis. Key features of Event-B are the use of set theory as a modelling notation, the use of refinement to represent systems at different abstraction levels and the use of mathematical proof to verify consistency between refinement levels. In this article we present the Rodin modelling tool that seamlessly integrates modelling and proving. We outline how the Event-B language was designed to facilitate proof and how the tool has been designed to support changes to models while minimising the impact of changes on existing proofs. We outline the important features of the prover architecture and explain how well-definedness is treated. The tool is extensible and configurable so that it can be adapted more easily to different application domains and development methods.
On ternary square-free circular words Circular words are cyclically ordered finite sequences of letters. We give a computer-free proof of the following result by Currie: square-free circular words over the ternary alphabet exist for all lengths l except for 5, 7, 9, 10, 14, and 17. Our proof reveals an interesting connection between ternary square-free circular words and closed walks in the K(3,3) graph. In addition, our proof implies an exponential lower bound on the number of such circular words of length l and allows one to list all lengths l for which such a circular word is unique up to isomorphism.
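A small brute-force sketch complementing the combinatorial statement in the preceding entry: checking whether a ternary circular word is square-free by looking for a square (a factor of the form xx) read around the circle. The particular test words are chosen only to illustrate the two outcomes.

```python
# Brute-force check that a circular word is square-free: a circular word contains
# a square iff some factor read around the circle (of length at most len(w)) has
# the form xx. We scan all starting positions and all possible half-lengths.
def circular_square_free(w):
    n = len(w)
    doubled = w + w                      # read factors "around the circle"
    for start in range(n):
        for half in range(1, n // 2 + 1):
            x = doubled[start:start + half]
            y = doubled[start + half:start + 2 * half]
            if x == y:
                return False
    return True

print(circular_square_free("012021"))   # True: a square-free ternary circular word of length 6
print(circular_square_free("01201"))    # False: contains the square 0101 read circularly (length 5 is excluded)
```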
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.220057
0.220057
0.220057
0.110029
0.075253
0.044352
0.005029
0.000213
0
0
0
0
0
0
An algorithm for blob hierarchy layout We present an algorithm for the aesthetic drawing of basic hierarchical blob structures, of the kind found in higraphs and statecharts and in other diagrams in which hierarchy is depicted as topological inclusion. Our work could also be useful in window system dynamics, and possibly also in things like newspaper layout, etc. Several criteria for aesthetics are formulated, and we discuss their motivation, our methods of implementation and the algorithm's performance.
Drawing Hypergraphs in the Subset Standard (Short Demo Paper) We report an experience on a practical system for drawing hypergraphs in the subset standard. The PATATE system is based on the application of a classical force directed method to a dynamic graph, which is deduced, at a given iteration time, from the hypergraph structure and particular vertex locations. Different strategies to define the dynamic underlying graph are presented. We illustrate in particular the method when the graph is obtained by computing an Euclidean Steiner tree.
How To Draw A Hypergraph There is an increasing amount of applications in computer science and other fields in which hypergraphs are used. This paper shows that in many cases the problem of drawing a hypergraph can be reduced to the problem of drawing normal graphs. This holds true especially when considering hypergraphs drawing in the edge standard, i.e. when the hyperedges connecting the vertices are drawn as curves.
Nesting in Euler Diagrams: syntax, semantics and construction This paper considers the notion of nesting in Euler diagrams, and how nesting affects the interpretation and construction of such diagrams. After setting up the necessary definitions for concrete Euler diagrams (drawn in the plane) and abstract diagrams (having just formal structure), the notion of nestedness is defined at both concrete and abstract levels. The concept of a dual graph is used to give an alternative condition for a drawable abstract Euler diagram to be nested. The natural progression to the diagram semantics is explored and we present a “nested form” for diagram semantics. We describe how this work supports tool-building for diagrams, and how effective we might expect this support to be in terms of the proportion of nested diagrams.
Constraint Diagrams: A Step Beyond UML The Unified Modeling Language (UML) is a set of notations for modelling object-oriented systems. It has become the de facto standard. Most of its notations are diagrammatic. An exception to this is the Object Constraint Language (OCL) which is essentially a textual, stylised form of first order predicate logic. We describe a notation, constraint diagrams, which were introduced as a visual technique intended to be used in conjunction with the UML for object-oriented modelling. Constraint diagrams provide a diagrammatic notation for expressing constraints (e.g., invariants) that could only be expressed in UML using OCL.
Query Optimization Techniques Utilizing Path Indexes in Object-Oriented Database Systems We propose query optimization techniques that fully utilize the advantages of path indexes in object-oriented database systems. Although path indexes provide an efficient access to complex objects, little research has been done on query optimization that fully utilizes path indexes. We first devise a generalized index intersection technique, adapted to the structure of the path index extended from conventional indexes, for utilizing multiple (path) indexes to access each class in a query. We...
Behavioural Constraints Using Events
Object-oriented modeling and design
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Combinatorial Landscapes. Fitness landscapes have proven to be a valuable concept in evolutionary biology, combinatorial optimization, and the physics of disordered systems. A fitness landscape is a mapping from a configuration space into the real numbers. The configuration space is equipped with some notion of adjacency, nearness, distance, or accessibility. Landscape theory has emerged as an attempt to devise suitable mathematical structures for describing the "static" properties of landscapes as well as their influence on the dynamics of adaptation. In this review we focus on the connections of landscape theory with algebraic combinatorics and random graph theory, where exact results are available.
Conceptual Structures: Fulfilling Peirce's Dream, Fifth International Conference on Conceptual Structures, ICCS '97, Seattle, Washington, USA, August 3-8, 1997, Proceedings
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Developing Mode-Rich Satellite Software by Refinement in Event B To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions. In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based verification of their mode consistency.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.222
0.222
0.111
0.089067
0.006
0.000088
0.000022
0
0
0
0
0
0
0
Manipulating and documenting software structures using SHriMP views An effective approach to program understanding involves browsing, exploring, and creating views that document software structures at different levels of abstraction. While exploring the myriad of relationships in a multi-million line legacy system, one can easily lose context. One approach to alleviate this problem is to visualize these structures using fisheye techniques. This paper introduces Simple Hierarchical Multi-Perspective views (SHriMPs). The SHriMP visualization technique has been incorporated into the Rigi reverse engineering system. This greatly enhances Rigi's capabilities for documenting design patterns and architectural diagrams that span multiple levels of abstraction. The applicability and usefulness of SHriMPs is illustrated with selected program understanding tasks.
Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity-Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation.
Visualizing queries and querying visualizations In this paper, we describe the approach to visual display and manipulation of databases that we have been investigating at the University of Toronto for the past few years. We present an overview and retrospective of the G
Visualization of structural information: automatic drawing of compound digraphs An automatic method for drawing compound digraphs that contain both inclusion edges and adjacency edges is presented. In the method vertices are drawn as rectangles (areas for texts, images, etc.), inclusion edges by the geometric inclusion among the rectangles, and adjacency edges by arrows connecting them. Readability elements such as drawing conventions and rules are identified, and a heuristic algorithm to generate readable diagrams is developed. Several applications are shown to demonstrate the effectiveness of the algorithm. The utilization of curves to improve the quality of diagrams is investigated. A possible set of command primitives for progressively organizing structures within this graph formalism is discussed. The computational time for the applications shows that the algorithm achieves satisfactory performance.
Drawing Clustered Graphs on an Orthogonal Grid Clustered graphs are graphs with recursive clustering structures over the vertices. For graphical representation, the clustering structure is represented by a simple region that contains the drawing of all the vertices which belong to that cluster. In this paper, we present an algorithm which produces planar drawings of clustered graphs in a convention known as orthogonal grid rectangular cluster drawings. If the input graph has n vertices, then the algorithm produces in O(n) time a drawing with O(n^2) area and at most 3 bends in each edge. This result is as good as existing results for classical planar graphs. Further, we show that our algorithm is optimal in terms of the number of bends per edge.
ENIAM: a more complete conceptual schema language
Towards Event-Driven Modelling for Database Design
Stepwise Removal of Virtual Channels in Distributed Algorithms A stepwise refinement method for the design of correct distributed algorithms is studied. The method frees the program designer from all the details of the target architecture of the system in early stages of the design process. The method is applied to a new aspect in the construction of distributed systems, the removal of virtual channels. We exemplify the design method by deriving a distributed algorithm. We show that the performed refinements preserve the correctness of the algorithm.
An Effective Implementation for the Generalized Input-Output Construct of CSP
Distributed data structures in Linda A distributed data structure is a data structure that can be manipulated by many parallel processes simultaneously. Distributed data structures are the natural complement to parallel program structures, where a parallel program (for our purposes) is one that is made up of many simultaneously active, communicating processes. Distributed data structures are impossible in most parallel programming languages, but they are supported in the parallel language Linda and they are central to Linda programming style. We outline Linda, then discuss some distributed data structures that have arisen in Linda programming experiments to date. Our intent is neither to discuss the design of the Linda system nor the performance of Linda programs, though we do comment on both topics; we are concerned instead with a few of the simpler and more basic techniques made possible by a language model that, we argue, is subtly but fundamentally different in its implications from most others.This material is based upon work supported by the National Science Foundation under Grant No. MCS-8303905. Jerry Leichter is supported by a Digital Equipment Corporation Graduate Engineering Education Program fellowship.
The three dimensions of requirements engineering: a framework and its applications There is an increasing number of contributions on how to solve the various problems within requirements engineering (RE). The purpose of this paper is to identify the main goals to be reached during the RE process in order to develop a framework for RE. This framework consists of three dimensions: the specification dimension, the representation dimension, and the agreement dimension. We show how this framework can be used to classify and clarify current RE research as well as RE support offered by methods and tools. In addition, the framework can be applied to the analysis of existing RE practice and the establishment of suitable process guidance. Last but not least, the framework offers a first step towards a common understanding of RE.
Laws of data refinement A specification language typically contains sophisticated data types that are expensive or even impossible to implement. Their replacement with simpler or more efficiently implementable types during the programming process is called data refinement. We give a new formal definition of data refinement and use it to derive some basic laws. The derived laws are constructive in that used in conjunction with the known laws of procedural refinement they allow us to calculate a new specification from a given one in which variables are to be replaced by other variables of a different type.
An algorithm for blob hierarchy layout We present an algorithm for the aesthetic drawing of basic hierarchical blob structures, of the kind found in higraphs and statecharts and in other diagrams in which hierarchy is depicted as topological inclusion. Our work could also be useful in window system dynamics, and possibly also in things like newspaper layout, etc. Several criteria for aesthetics are formulated, and we discuss their motivation, our methods of implementation and the algorithm's performance.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.101724
0.000667
0.0005
0.00017
0.000075
0.000034
0.000009
0
0
0
0
0
0
0
On Coons and other methods for the representation of curved surfaces Although Coons surfaces are mentioned frequently in the context of computer graphics and computer-aided design, very little of the work has been published. The purpose of this paper is to provide an up to date account of Coons methods and extensions thereof, drawing mainly on unpublished material. The subject is not approached from a rigorous mathematical point of view, which can be found elsewhere, but from the standpoint of the computer scientist or engineer who wishes to implement or use such methods. An extensive bibliography of the subject, including unpublished papers, is appended.
Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria desired in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by the first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions allowing for parallel edge detection processing. The implementation is very simple and computationally efficient
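A hedged illustration of the recursive-filtering idea in that entry (not Canny's operator nor the paper's optimal filter): a first-order exponential IIR smoother run forward and backward along each row, followed by a simple difference, gives a cheap separable edge response whose per-pixel cost is independent of the smoothing scale. numpy is assumed; the parameter alpha and the toy image are illustrative.

```python
# First-order recursive (IIR) smoothing run forward and backward along an axis,
# then a difference along rows: a stand-in showing why recursive filters give a
# smoothing-scale-independent cost per pixel. Not the paper's optimal operator.
import numpy as np

def iir_smooth(signal, alpha):
    """Forward + backward first-order exponential smoothing (zero-phase overall)."""
    fwd = np.empty_like(signal, dtype=float)
    fwd[0] = signal[0]
    for i in range(1, len(signal)):
        fwd[i] = alpha * signal[i] + (1 - alpha) * fwd[i - 1]
    bwd = np.empty_like(fwd)
    bwd[-1] = fwd[-1]
    for i in range(len(signal) - 2, -1, -1):
        bwd[i] = alpha * fwd[i] + (1 - alpha) * bwd[i + 1]
    return bwd

def edge_response(image, alpha=0.4):
    smoothed = np.apply_along_axis(iir_smooth, 1, image.astype(float), alpha)
    return np.abs(np.gradient(smoothed, axis=1))    # derivative along rows

img = np.zeros((4, 16))
img[:, 8:] = 255.0                                  # a vertical step edge
resp = edge_response(img)
print(int(resp.argmax(axis=1)[0]))                  # response peaks near column 8
```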
Robust detection of region boundaries in a sequence of images The problem of region recognition in a sequence of images is addressed, and a recognition system that finds and tracks region-of-interest boundaries in those images is presented. These regions are not stationary: parts of the boundary may be missing or completely blurred and outliers are likely to exist. Thus, the emphasis is on robustification and efficiency. The region segmentation problem was formulated as a multihypothesis test that seeks the boundary that maximizes a performance criterion which is general in terms of blur and noise. Efficiency is obtained by restricting outline candidates to an adaptive search area near the optimal boundary from the previous section. The search for the maximum is cast into a fast first-order dynamic programming procedure. Robust statistical techniques are used in the multihypothesis test to reduce the sensitivity to outliers and unexpected noise. The inconsistent parts of the optimal boundary are then detected by using a robust expectation maximization algorithm and are interpolated from higher-quality parts. The boundary obtained by this method is used as the reference boundary for the next image
Image transformation approach to nonlinear shape restoration Nonlinear shape distortions are considered as uncertainty in computer vision, robot vision, and pattern recognition. A new approach to nonlinear shape restoration based on nonlinear image shape transformation is proposed. The principal idea of this method is that two-dimensional (2-D) transformation is used to approximate a three-dimensional (3-D) problem. Five particular image transformation models, bilinear, quadratic, cubic, biquadratic, and bicubic models, are presented in this paper to handle some special cases. Two general transformation models, Coons and harmonic models, are also introduced to tackle more general and more complicated problems. These models are derived from finite-element theory and they can be used to approximate some nonlinear shape distortions under certain conditions. Furthermore, their inverse transformations can be used to remove nonlinear shape distortions. Some useful algorithms are developed. The performance of the proposed approach for nonlinear shape restoration has been evaluated in several experiments with interesting results
A comparative study of nonlinear shape models for digital image processing and pattern recognition Four nonlinear shape models are presented: polynomial, Coons, perspective, and projective modes. Algorithms and some properties of these models are provided. For a given physical model, such as a perspective model, comparisons are made with other mathematical models. It is proved that, under certain conditions, the perspective models can be replaced by the Coons models. Problems related to substitution and approximation of practical models that facilitate digital image processing are raised and discussed. Experimental results on digital images are presented
Moment images, polynomial fit filters, and the problem of surface interpolation A uniform hierarchical procedure for processing incomplete image data is described. It begins with the computation of local moments within windows centered on each output sample point. Arrays of such measures, called moment images, are computed efficiently through the application of a series of small kernel filters. A polynomial surface is then fit to the available image data within a local neighborhood of each sample point. Best-fit polynomials are obtained from the corresponding local moments. The procedure, hierarchical polynomial fit filtering, yields a multiresolution set of low-pass filtered images. The set of low-pass images is combined by multiresolution interpolation to form a smooth surface passing through the original image data
Decomposition of gray-scale morphological structuring elements Mathematical morphology has been developed recently for many applications in image processing and analysis. Most image processing architectures adapted to morphological operations use structuring elements of limited size. Implementation difficulties arise when an algorithm requires the use of a large size structuring element. In this paper we present techniques for decomposing big grayscale morphological structuring elements into combined structures of segmented small components. According to mathematical morphology properties, such decomposition allows us to equate morphological operations on big structuring elements with operations on decomposed small structuring components. The decomposition is suitable for parallel pipelined architecture. This technique will allow full freedom for users to design any kind and any size of gray-scale morphological structuring element.
Effectiveness of exhaustive search and template matching against watermark desynchronization By focusing on a simple example, we investigate the effectiveness of exhaustive watermark detection and resynchronization through template matching against watermark desynchronization. We find that if the size of the search space does not increase exponentially, both methods provide asymptotically good results. We also show that the exhaustive search approach outperforms template matching from the point of view of reliable detection.
Optimal prefix codes for sources with two-sided geometric distributions A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for nonnegative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded
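A minimal sketch of the baseline that the preceding entry refers to: Golomb-Rice coding of nonnegative integers combined with the usual zig-zag folding of signed residuals, which is exactly the kind of heuristic mapping that codes designed directly for two-sided geometric sources avoid. The parameter k and the residual values are illustrative.

```python
# Golomb-Rice coding (Golomb code with m = 2**k) of nonnegative integers, plus the
# common zig-zag folding of signed residuals onto 0, 1, 2, ... -- the heuristic
# step that optimal two-sided geometric codes can dispense with.
def zigzag(n):
    """Map ..., -2, -1, 0, 1, 2, ... to 3, 1, 0, 2, 4, ... (non-negative integers)."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(value, k):
    """Unary-coded quotient, a '0' terminator, then a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, "b").zfill(k) if k else ""
    return "1" * q + "0" + remainder

def rice_decode(bits, k):
    q = 0
    i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1                                    # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r

residuals = [0, -1, 3, -4, 2]
k = 1
coded = [rice_encode(zigzag(e), k) for e in residuals]
print(coded)                                  # ['00', '01', '11100', '11101', '1100']
print([rice_decode(c, k) for c in coded])     # [0, 1, 6, 7, 4]: the folded residuals round-trip
```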
Tangible User Interfaces: Past, Present, and Future Directions In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This monograph examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs including perspectives from cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
Consistency of the static and dynamic components of object-oriented specifications Object-oriented (OO) modeling and design methodologies have been receiving significant attention since they allow a quick and easy-to-grasp overview of a complex model. However, in the literature there are no formal frameworks that allow designers to verify the consistency (absence of contradictions) of both the static and dynamic components of the specified models, which are often assumed to be consistent. In this paper, a unifying formal framework is proposed that allows the consistency checking of both the static and dynamic components of a simplified OO model.
Specification of real-time systems in real-time temporal interval logic A real-time variant of temporal interval logic is proposed for the specification and reasoning of real-time systems. In the framework of the logic, it is possible to specify qualitative and quantitative aspects of temporal behaviors of systems. The formalism provides capabilities for quantitative specification of time behavior. The harmonization of temporal interval logic with real-time features leads to a very-high-level notation for the specification of real-time systems. Temporal interval logic, being event-based, also facilitates the specification of quantitative aspects of temporal behavior relative to the occurrence of events in a given context. The use of the formalism is shown for three examples of real-time system specification: a packet network with rerouting, a traffic-light controller, and a time-constrained broadcast bus protocol
A knowledge representation language for requirements engineering Requirements engineering, the phase of software development where the users' needs are investigated, is more and more shifting its concern from the target system towards its environment. A new generation of languages is needed to support the definition of application domain knowledge and the behavior of the universe around the computer. This paper assesses the applicability of classical knowledge representation techniques to this purpose. Requirements engineers insist, however, more on natural representation, whereas expert systems designers insist on efficient automatic use of the knowledge. Given this priority of expressiveness, two candidates emerge: the semantic networks and the techniques based on logic. They are combined in a language called the ERAE model, which is illustrated on examples, and compared to other requirements engineering languages.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.052904
0.052726
0.052726
0.016667
0.009601
0.001021
0.000208
0.000011
0
0
0
0
0
0
Lossless compression of VLSI layout image data. We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
CASE tools as collaborative support technologies Since the inception of computers, the software industry has searched for dramatic solutions to its systems development problems. In the latter half of the 1980s and into the 1990s, the search has focused on automated software engineering (computer-assisted software engineering or CASE) tools (see, for example, [17]). Many in the software engineering field claim CASE tools will completely replace the software developer [15]. A more realistic view, however, is that such tools will aid systems developers in the process of specifying, designing, and constructing software systems.
Monitoring software requirements using instrumented code Ideally, software is derived from requirements whose properties have been established as good. However, it is difficult to define and analyze requirements. Moreover derivation of software from requirements is error prone. Finally, the installation and use of compiled software can introduce errors. Thus, it can be difficult to provide assurances about the state of a software's execution. We present a framework to monitor the requirements of software as it executes. The framework is general, and allows for automated support. The current implementation uses a combination of assertion and model checking to inform the monitor. We focus on two issues: (1) the expression of "suspect requirements", and (2) the transparency of the software and its environment to the monitor. We illustrate these issues with the widely known problems of the Dining Philosophers and the CCITT X.509 authentication. Each are represented as Java programs which are then instrumented and monitored.
Deriving Tabular Event-Based Specifications from Goal-Oriented Requirements Models Goal-oriented methods are increasingly popular for elaborating software requirements. They provide systematic support for incrementally building intentional, structural and operational models of the software and its environment together with various techniques for early analysis, e.g., to manage conflicting goals or anticipate abnormal environment behaviors that prevent goals from being achieved. On the other hand, tabular event-based methods are well-established for specifying operational requirements for control software. They provide sophisticated techniques and tools for late analysis of software behavior models through, e.g., simulation, model checking or table exhaustiveness checks. The paper proposes to take the best out of these two worlds to engineer requirements for control software. It presents a technique for deriving event-based specifications, written in the SCR tabular language, from operational specifications built according to the KAOS goal-oriented method. The technique consists in a series of transformation steps each of which resolves semantic, structural or syntactic differences between the KAOS source language and the SCR target language. Some of these steps need human intervention and illustrate the kind of semantic subtleties that need to be taken into account when integrating multiple formalisms. As a result of our technique SCR specifiers may use upstream goal-based processes à la KAOS for the incremental elaboration, early analysis, organization and documentation of their tables while KAOS modelers may use downstream tables à la SCR for later analysis of the behavior models derived from goal specifications.
Requirements engineering in 2001: (virtually) managing a changing reality Trends in society and technology force requirements engineering to expand its role from a one-shot activity in the development process to a virtual image that accompanies the changing reality of a system. A maturing software market also requires a better understanding of the differentiation in market segments for requirements engineering and standardisation of methodologies within these segments. On the research side, this requires a coherent perspective of hitherto parallel research directions towards a comprehensive understanding of requirements processes, as well as the optimal exploitation of new technologies that support the main role of requirements engineering; mutual learning of all stakeholders concerned
Status report: requirements engineering It is argued that, in general, requirements engineering produces one large document, written in a natural language, that few people bother to read. Projects that do read and follow the document often build systems that do not satisfy needs. The reasons for the current state of the practice are listed. Research areas that have significant payoff potential, including improving natural-language specifications, rapid prototyping and requirements animation, requirements clustering, requirements-based testing, computer-aided requirements engineering, requirements reuse, research into methods, knowledge engineering, formal methods, and a unified framework, are outlined.
Goal-Oriented Requirements Engineering: A Guided Tour Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention over the past few years. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed. The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience with such approaches and tool support are briefly discussed as well.
A requirements taxonomy for reducing Web site privacy vulnerabilities The increasing use of personal information on Web-based applications can result in unexpected disclosures. Consumers often have only the stated Web site policies as a guide to how their information is used, and thus on which to base their browsing and transaction decisions. However, each policy is different, and it is difficult—if not impossible—for the average user to compare and comprehend these policies. This paper presents a taxonomy of privacy requirements for Web sites. Using goal-mining, the extraction of pre-requirements goals from post-requirements text artefacts, we analysed an initial set of Internet privacy policies to develop the taxonomy. This taxonomy was then validated during a second goal extraction exercise, involving privacy policies from a range of health care related Web sites. This validation effort enabled further refinement to the taxonomy, culminating in two classes of privacy requirements: protection goals and vulnerabilities. Protection goals express the desired protection of consumer privacy rights, whereas vulnerabilities describe requirements that potentially threaten consumer privacy. The identified taxonomy categories are useful for analysing implicit internal conflicts within privacy policies, the corresponding Web sites, and their manner of operation. These categories can be used by Web site designers to reduce Web site privacy vulnerabilities and ensure that their stated and actual policies are consistent with each other. The same categories can be used by customers to evaluate and understand policies and their limitations. Additionally, the policies have potential use by third-party evaluators of site policies and conflicts.
Cooperative negotiation in concurrent engineering design Design can be modeled as a cooperative multi-agent problem solving task where different agents possess different knowledge and evaluation criteria. These differences may result in inconsistent design decisions and conflicts that have to be resolved during design. The process by which resolution of inconsistencies is achieved in order to arrive at a coherent set of design decisions is negotiation. In this paper, we discuss some of the characteristics of design which make it a very challenging domain for investigating negotiation techniques. We propose a negotiation model that incorporates accessing information in existing designs, communication of design rationale and criticisms of design decisions, as well as design modifications based on constraint relaxation and comparison of utilities. The model captures the dynamic interactions of the cooperating agents during negotiations. We also present representational structures of the expertise of the various agents and a communication protocol that supports multi-agent negotiation.
Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables.The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
MULTILISP: a language for concurrent symbolic computation Multilisp is a version of the Lisp dialect Scheme extended with constructs for parallel execution. Like Scheme, Multilisp is oriented toward symbolic computation. Unlike some parallel programming languages, Multilisp incorporates constructs for causing side effects and for explicitly introducing parallelism. The potential complexity of dealing with side effects in a parallel context is mitigated by the nature of the parallelism constructs and by support for abstract data types: a recommended Multilisp programming style is presented which, if followed, should lead to highly parallel, easily understandable programs.Multilisp is being implemented on the 32-processor Concert multiprocessor; however, it is ultimately intended for use on larger multiprocessors. The current implementation, called Concert Multilisp, is complete enough to run the Multilisp compiler itself and has been run on Concert prototypes including up to eight processors. Concert Multilisp uses novel techniques for task scheduling and garbage collection. The task scheduler helps control excessive resource utilization by means of an unfair scheduling policy; the garbage collector uses a multiprocessor algorithm based on the incremental garbage collector of Baker.
A Software Development Environment for Improving Productivity
Story-map: iPad companion for long form TV narratives Long form TV narratives present multiple continuing characters and story arcs that last over multiple episodes and even over multiple seasons. Writers increasingly take pride in creating coherent and persistent story worlds with recurring characters and references to backstory. Since viewers may join the story at different points and different levels of commitment, they need support to orient them to the fictional world, to remind them of plot threads, and to allow them to review important story sequences across episodes. Using the affordances of the digital medium we can create navigation patterns and auxiliary information streams to minimize confusion and maximize immersion in the story world. In our application, the iPad is used as a secondary screen to create a character map synchronized with the TV content, and to support navigation of story threads across episodes.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.02006
0.015099
0.014298
0.013872
0.007526
0.005184
0.002537
0.00094
0.000179
0.000048
0
0
0
0
Robust sampled-data stabilization of linear systems: an input delay approach A new approach to robust sampled-data control is introduced. The system is modelled as a continuous-time one, where the control input has a piecewise-continuous delay. Sufficient linear matrix inequality (LMI) conditions for sampled-data state-feedback stabilization of such systems are derived via a descriptor approach to time-delay systems. The only restriction on the sampling is that the distance between the sequel sampling times is not greater than some prechosen h>0 for which the LMIs are feasible. For h→0 the conditions coincide with the necessary and sufficient conditions for continuous-time state-feedback stabilization. Our approach is applied to two problems: to sampled-data stabilization of systems with polytopic-type uncertainties and to regional stabilization by sampled-data saturated state-feedback.
On robust stability of aperiodic sampled-data systems - An integral quadratic constraint approach This manuscript is concerned with stability analysis of sampled-data systems with non-uniform sampling patterns. The stability problem is tackled from a continuous-time point of view, via the so-called “input delay approach”, where the “aperiodic sampling operation” is modelled by an “average delay-difference” operator for which a characterization based on integral quadratic constraints (IQC) is identified. The system is then viewed as a feedback interconnection of a stable linear time-varying system and the “average delay-difference” operator. With the IQCs identified for the “average delay-difference” operator, the IQC theory is applied to derive convex stability conditions. Results of numerical tests are given to illustrate the effectiveness of the proposed approach.
Delay-dependent H∞ synchronization for chaotic neural networks with network-induced delays and packet dropouts. This paper investigates the problem of H∞ synchronization for chaotic neural networks with network-induced delays and packet dropouts. A novel master-slave synchronization scheme is established where the network-induced delays and data packet dropouts are taken into consideration. By constructing the Lyapunov functional and employing the Wirtinger-based integral inequality, several delay-dependent conditions are obtained to guarantee that the error system is globally asymptotically stable and satisfies a prescribed H∞ performance constraint. Finally, two numerical examples are presented to validate the feasibility and effectiveness of the results derived.
An optimization-based approach to sampled-data control of networked control systems with multiple delays. A networked control system (NCS) is a control system in which plants, sensors, controllers, and actuators are connected through communication networks. In this paper, the optimal sampled-data control problem of linear systems with multiple time delays, which is one of the fundamental problems in NCSs, is considered, where multiple time delays and the sampling period are uncertain. First, the problem is transformed into the optimal control problem of discrete-time uncertain linear systems, where the intersample behavior is considered in the cost function. Next, under a certain assumption, the obtained problem is further transformed into a convex quadratic programming problem. Finally, a numerical simulation is presented.
Stability analysis of systems with aperiodic sample-and-hold devices Motivated by the widespread use of networked and embedded control systems, improved stability conditions are derived for sampled-data feedback control systems with uncertainly time-varying sampling intervals. The results are derived by exploiting the passivity-type property of the operator arising in the input-delay approach to the system in addition to the gain of the operator, and are hence less conservative than existing ones.
Robust finite-time non-fragile sampled-data control for T-S fuzzy flexible spacecraft model with stochastic actuator faults. In this paper, the problem of robust finite-time non-fragile sampled-data control is investigated for an uncertain flexible spacecraft model with stochastic actuator faults based on the Takagi-Sugeno (T-S) fuzzy model approach. Specifically, the existence of stochastic actuator faults is described by using the Bernoulli distribution. On the basis of the input-delay approach, the sampled-data system is reformulated as a continuous time-varying delay system. Further, based on the Lyapunov functional approach and linear matrix inequality technique, sufficient conditions are derived for the existence of the desired state feedback controller ensuring stochastic finite-time boundedness with a prescribed H∞ performance index. Finally, numerical simulations are provided for the practical flexible spacecraft control system to verify the effectiveness and applicability of the proposed control design.
Improved delay-dependent stability criteria for neutral systems with mixed interval time-varying delays and nonlinear disturbances. It is well-known that the stability analysis of time-delay systems is a key step to design appropriate controllers and/or filters for those systems. In this paper, the problem of the delay-dependent stability analysis of neutral systems with mixed interval time-varying delays with/without nonlinear perturbations is revisited. Bounded derivatives of the discrete and neutral delays with upper-bounds not limited to be strictly less than one are considered. New stability criteria are developed using the Lyapunov-Krasovskii methodology which are expressed in terms of linear matrix inequalities (LMIs). An augmented Lyapunov-Krasovskii functional (LKF) utilizing triple integral terms and the descriptor transformation is employed to this aim. In addition, advanced techniques such as Wirtinger-based single and double-integral inequalities, delay decomposition technique combined with the reciprocally convex approach, as well as a few effective free-weighting matrices are employed to achieve less conservative stability conditions. Comprehensive benchmarking numerical examples and simulation studies demonstrate the effectiveness of the proposed stability criteria with respect to some recently published results. The efficacy of the modern integral inequalities is also emphasized against the conventional Jensen's inequalities.
Wirtinger-based integral inequality: Application to time-delay systems In the last decade, the Jensen inequality has been intensively used in the context of time-delay or sampled-data systems since it is an appropriate tool to derive tractable stability conditions expressed in terms of linear matrix inequalities (LMIs). However, it is also well-known that this inequality introduces an undesirable conservatism in the stability conditions and looking at the literature, reducing this gap is a relevant issue and always an open problem. In this paper, we propose an alternative inequality based on the Fourier Theory, more precisely on the Wirtinger inequalities. It is shown that this resulting inequality encompasses the Jensen one and also leads to tractable LMI conditions. In order to illustrate the potential gain of employing this new inequality with respect to the Jensen one, two applications on time-delay and sampled-data stability analysis are provided.
Summation Inequalities to Bounded Real Lemmas of Discrete-Time Systems With Time-Varying Delay. Summation inequality is an important technique for analysis of discrete-time systems with a time-varying delay. It seems from the literature that a tighter inequality usually leads to a less conservative criterion. Based on the H∞ performance analysis problem, this note presents different findings on the relationship between the conservatism of the bounded real lemma (BRL) and the tightness of the summation inequality. Firstly, the BRL obtained by the Wirtinger-based inequality (WBI) is not always less conservative than the one by the Jensen-based inequality although the WBI is tighter. Secondly, the WBI is tighter than a general free-matrix-based inequality (GFMBI) developed in this note, while the BRL obtained via the GFMBI is less conservative than the WBI-based BRL. Finally, a numerical example is given to demonstrate those findings.
Consensus-based algorithms for distributed filtering The paper addresses Distributed State Estimation (DSE) over sensor networks. Two existing consensus approaches for DSE of linear systems, named consensus on information (CI) and consensus on measurements (CM), are extended to nonlinear systems. Further, a novel hybrid consensus approach exploiting both CM and CI (named HCMCI = Hybrid CM + CI) is introduced in order to combine their complementary benefits. Novel theoretical results, limited to linear systems, on the guaranteed stability of the HCMCI filter under minimal requirements (i.e. collective observability and network connectivity) are proved. Finally, a simulation case-study is presented in order to comparatively show the effectiveness of the proposed consensus-based state estimators.
Conjunction as composition Partial specifications written in many different specification languages can be composed if they are all given semantics in the same domain, or alternatively, all translated into a common style of predicate logic. The common semantic domain must be very general, the particular semantics assigned to each specification language must be conducive to composition, and there must be some means of communication that enables specifications to build on one another. The criteria for success are that a wide variety of specification languages should be accommodated, there should be no restrictions on where boundaries between languages can be placed, and intuitive expectations of the specifier should be met.
STeP: Deductive-Algorithmic Verification of Reactive and Real-Time Systems. The Stanford Temporal Prover, STeP, combines deductive methods with algorithmic techniques to verify linear-time temporal logic specifications of reactive and real-time systems. STeP uses verification rules, verification diagrams, automatically generated invariants, model checking, and a collection of decision procedures to verify finite- and infinite-state systems. System Description: The Stanford Temporal Prover, STeP, supports the computer-aided formal verification of reactive, real-time...
Object Interaction in Object-Oriented Deductive Conceptual Models We present the main components of an object-oriented deductive approach to conceptual modelling of information systems. This approach does not model object interaction explicitly. However interaction among objects can be derived by means of a formal procedure that we outline.
Communicating with Synchronized Environments In the modern design environments, different modules, available in existent libraries, may obey different architectural styles and execution models. Reaching a well-behaved composition of such modules is a very important task of the system designer. In the framework of the action systems formalism, we analyze the co-existence of two models of execution, one synchronized, the other, interleaved. We devise a communication scheme, similar to the classical paradigm of polling, which allows us to model synchronized components that correctly exchange information, within the borders of a global system, with their non-synchronized partners. Derivations of such mechanisms follow specific correctness rules for refinement. We illustrate our methods on an audio system example, implementable as either a software or a hardware device
1.005147
0.005627
0.005304
0.005019
0.003795
0.002381
0.001587
0.000392
0.000072
0.000001
0
0
0
0
Low-complexity predictive lossy compression of hyperspectral and ultraspectral images Lossy compression of hyperspectral and ultraspectral images is traditionally performed using 3D transform coding. This approach yields good performance, but its complexity and memory requirements are unsuitable for onboard compression. In this paper we propose a low-complexity lossy compression scheme based on prediction, uniform threshold quantization, and rate-distortion optimization. Its performance is competitive with that of state-of-the-art 3D transform coding schemes, but the complexity is immensely lower. The algorithm is able to limit the scope of errors, and is amenable to parallel implementation, making it suitable for onboard compression at high throughputs.
Near lossless compression of hyperspectral images based on distributed source coding Effective compression of on-board hyperspectral images has been an active topic in the field of hyperspectral remote sensing. In order to solve the effective compression of on-board hyperspectral images, a new distributed near lossless compression algorithm based on multilevel coset codes is proposed. Due to the diverse importance of each band, a new adaptive rate allocation algorithm is proposed, which allocates a rational rate for each band according to the size of a weight factor defined for hyperspectral images, subject to the target rate constraints. Multiband prediction is introduced for Slepian-Wolf lossless coding and an optimal quantization algorithm is presented under the correct reconstruction of the Slepian-Wolf decoder, which minimizes the distortion of reconstructed hyperspectral images under the target rate. Then the Slepian-Wolf encoder exploits the correlation of the quantized values to generate the final bit streams. Experimental results show that the proposed algorithm has both higher compression efficiency and lower encoder complexity than several existing classical algorithms.
Progressive distributed coding of multispectral images We present in this paper a novel distributed coding scheme for lossless and progressive compression of multispectral images. The main strategy of this new scheme is to explore data redundancies at the decoder in order to design a lightweight yet very efficient encoder suitable for onboard applications during acquisition of multispectral image. A sequence of increasing resolution layers is encoded and transmitted successively until the original image can be losslessly reconstructed from all layers. We assume that the decoder with abundant resources is able to perform adaptive region-based predictor estimation to capture spatially varying spectral correlation with the knowledge of lower-resolution layers, thus generate high quality side information for decoding the higher-resolution layer. Progressive transmission enables the spectral correlation to be refined successively, resulting in gradually improved decoding performance of higher-resolution layers as more data are decoded. Simulations have been carried out to demonstrate that the proposed scheme, with innovative combination of low complexity encoding, lossless compression and progressive coding, can achieve competitive performance comparing with high complexity state-of-the-art 3-D DPCM technique.
Distributed source coding techniques for lossless compression of hyperspectral images This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
Lossless Hyperspectral-Image Compression Using Context-Based Conditional Average In this paper, a new algorithm for lossless compression of hyperspectral images is proposed. The spectral redundancy in hyperspectral images is exploited using a context-match method driven by the correlation between adjacent bands. This method is suitable for hyperspectral images in the band-sequential format. Moreover, this method compares favorably with the recent proposed lossless compression algorithms in terms of compression, with a lower complexity.
Low Complexity, High Efficiency Probability Model for Hyper-spectral Image Coding This paper describes a low-complexity, high-efficiency lossy-to-lossless coding scheme for hyper-spectral images. Together with only a 2D wavelet transform on individual image components, the proposed scheme achieves coding performance similar to that achieved by a 3D transform strategy that adds one level of wavelet decomposition along the depth axis of the volume. The proposed scheme operates by means of a probability model for symbols emitted by the bit plane coding engine. This probability model captures the statistical behavior of hyper-spectral images with high precision. The proposed method is implemented in the core coding system of JPEG2000, reducing computational costs by 25%.
A Block-Based Inter-Band Lossless Hyperspectral Image Compressor We propose a hyperspectral image compressor called BH which considers its input image as being partitioned into square blocks, each lying entirely within a particular band, and compresses one such block at a time by using the following steps: first predict the block from the corresponding block in the previous band, then select a predesigned code based on the prediction errors, and finally encode the predictor coefficient and errors. Apart from giving good compression rates and being fast, BH can provide random access to spatial locations in the image. We hypothesize that BH works well because it accommodates the rapidly changing image brightness that often occurs in hyperspectral images. We also propose an intraband compressor called LM which is worse than BH, but whose performance helps explain BH's performance.
Partitioned vector quantization: application to lossless compression of hyperspectral images A novel design for a vector quantizer that uses multiple codebooks of variable dimensionality is proposed. High dimensional source vectors are first partitioned into two or more subvectors of (possibly) different length and then, each subvector is individually encoded with an appropriate codebook. Further redundancy is exploited by conditional entropy coding of the subvectors indices. This scheme allows practical quantization of high dimensional vectors in which each vector component is allowed to have different alphabet and distribution. This is typically the case of the pixels representing a hyperspectral image. We present experimental results in the lossless and near-lossless encoding of such images. The method can be easily adapted to lossy coding.
Sources Which Maximize the Choice of a Huffman Coding Tree
Conception, evolution, and application of functional programming languages The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.
Developing interactive information systems with the User Software Engineering methodology User Software Engineering is a methodology, supported by automated tools, for the systematic development of interactive information systems. The USE methodology gives particular attention to effective user involvement in the early stages of the software development process, concentrating on external design and the use of rapidly created and modified prototypes of the user interface. The USE methodology is supported by an integrated set of graphically based tools. This paper describes the User Software Engineering methodology and the tools that support the methodology.
Viewpoint Consistency in Z and LOTOS: A Case Study. Specification by viewpoints is advocated as a suitable method of specifying complex systems. Each viewpoint describes the envisaged system from a particular perspective, using concepts and specification languages best suited for that perspective. Inherent in any viewpoint approach is the need to check or manage the consistency of viewpoints and to show that the different viewpoints do not impose contradictory requirements. In previous work we have described a range of techniques for...
TAER: time-aware entity retrieval-exploiting the past to find relevant entities in news articles Retrieving entities instead of just documents has become an important task for search engines. In this paper we study entity retrieval for news applications, and in particular the importance of the news trail history (i.e., past related articles) in determining the relevant entities in current articles. This is an important problem in applications that display retrieved entities to the user, together with the news article. We analyze and discuss some statistics about entities in news trails, unveiling some unknown findings such as the persistence of relevance over time. We focus on the task of query dependent entity retrieval over time. For this task we evaluate several features, and show that their combinations significantly improves performance.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.102297
0.054697
0.052297
0.010734
0.003881
0.001202
0.000247
0.000097
0.000007
0
0
0
0
0
An Analytical Solution for Probabilistic Guarantees of Reservation-Based Soft Real-Time Systems We show a methodology for the computation of the probability of deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique for the system that reduces the computation of such a probability to that of the steady state probability of an infinite state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution where different accuracy/computation time trade-offs can be obtained by operating on the granularity of the model. More importantly we offer a closed form conservative bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability in one real-time application of practical interest. When this bound is used for the optimisation of the overall Quality of Service for a set of tasks sharing the CPU, it produces a good sub-optimal solution in a small amount of time.
Efficient and robust probabilistic guarantees for real-time tasks This paper presents a new method for providing probabilistic real-time guarantees to tasks scheduled through resource reservations. Previous work on probabilistic analysis of reservation-based schedulers is extended by improving the efficiency and robustness of the probability computation. Robustness is improved by accounting for a possibly incomplete knowledge of the distribution of the computation times (which is typical in realistic applications). The proposed approach computes a conservative bound for the probability of missing deadlines, based on the knowledge of the probability distributions of the execution times and of the inter-arrival times of the tasks. In this paper, such a bound is computed in realistic situations, comparing it with simulative results and with the exact computation of deadline miss probabilities (without pessimistic bounds). Finally, the impact of the incomplete knowledge of the execution times distribution is evaluated.
A Probabilistic Calculus for Probabilistic Real-Time Systems Challenges within real-time research are mostly in terms of modeling and analyzing the complexity of actual real-time embedded systems. Probabilities are effective in both modeling and analyzing embedded systems by increasing the amount of information for the description of elements composing the system. Elements are tasks and applications that need resources, schedulers that execute tasks, and resource provisioning that satisfies the resource demand. In this work, we present a model that considers component-based real-time systems with component interfaces able to abstract both the functional and nonfunctional requirements of components and the system. Our model faces probabilities and probabilistic real-time systems unifying in the same framework probabilistic scheduling techniques and compositional guarantees varying from soft to hard real time. We provide an algebra to work with the probabilistic notation developed and form an analysis in terms of sufficient probabilistic schedulability conditions for task systems with either preemptive fixed-priority or earliest deadline first scheduling paradigms.
An Analytical Bound for Probabilistic Deadlines The application of a resource reservation scheduler to soft real-time systems requires effective means to compute the probability of a deadline miss given a particular choice for the scheduling parameters. This is a challenging research problem, for which only numeric solutions, complex and difficult to manage, are currently available. In this paper, we adopt an analytical approach. By using an approximate and conservative model for the evolution of a periodic task scheduled through a reservation, we construct a closed form lower bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability for many real-time applications of interest.
A framework for the response time analysis of fixed-priority tasks with stochastic inter-arrival times Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks where tasks have inter-arrival times described by discrete probabilistic distribution functions, instead of minimum inter-arrival (MIT) values.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
Performance evaluation in content-based image retrieval: overview and proposals Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented.
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support.
The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Matching conceptual graphs as an aid to requirements re-use The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, ReqColl (Requirements Collector) is introduced. The tool uses conceptual graphs to represent domain concepts and attempts to recognise new concepts through the use of a matching facility. The overall approach to requirements capture is first described and the approach to matching illustrated informally. The detailed procedure for matching conceptual graphs is then given. Finally ReqColl is compared to similar work elsewhere and some future research directions indicated.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.1
0.1
0.1
0.04
0.0125
0
0
0
0
0
0
0
0
0
Extending ERS for Modelling Dynamic Workflows in Event-B Event-B is a state-based formal method for modelling and verifying the consistency of discrete systems. Event refinement structures (ERS) augment Event-B with hierarchical diagrams, providing explicit support for workflows and refinement relationships. Despite the variety of ERS combinators, ERS still lacks the flexibility to model dynamic workflows that support dynamic changes in the degree of concurrency. Specifically in the cases where the degree of parallelism is data dependent and data values can change during execution. In this paper, we propose two types of extensions in ERS to support dynamic modelling using Event-B. The first extension is supporting data-dependent workflows where data changes are possible. The second extension improves ERS by providing exception handling support. Semantics are given to an ERS diagram by generating an Event-B model from it. We demonstrate the Event-B encodings of the proposed ERS extensions by modelling a concurrent emergency dispatch case study.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
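To illustrate the kind of search the abstract describes, here is a minimal bit-flip tabu search for a tiny multiconstraint knapsack instance. It uses only a plain tabu tenure, none of the paper's specialized choice rules, aspiration criteria or target analysis; the instance data are made up.

```python
# Plain bit-flip tabu search for a 0/1 multiconstraint knapsack (sketch only).
import random

values  = [10, 13, 7, 5, 9]
weights = [[2, 3, 1, 4, 2],      # constraint 1
           [3, 1, 2, 2, 4]]      # constraint 2
caps    = [7, 8]

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(len(x))) <= c
               for w, c in zip(weights, caps))

def value(x):
    return sum(v * xi for v, xi in zip(values, x)) if feasible(x) else -1

def tabu_search(iters=200, tenure=3, seed=1):
    random.seed(seed)
    x = [0] * len(values)
    best, best_val = x[:], value(x)
    tabu = {}                       # flipped index -> iteration when it expires
    for it in range(iters):
        moves = [(value(x[:i] + [1 - x[i]] + x[i + 1:]), i)
                 for i in range(len(x)) if tabu.get(i, 0) <= it]
        if not moves:
            continue
        val, i = max(moves)         # best non-tabu flip, even if non-improving
        x[i] = 1 - x[i]
        tabu[i] = it + tenure
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

print(tabu_search())
```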
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
One-Class Learning for Human-Robot Interaction A suitable learning and classification mechanism is a crucial premise for Human-Robot Interaction. To this purpose, several one-class classification methods have been investigated using wavelet features (parameters of the Hidden Markov Tree model) in this paper. Only target class patterns are used to train the class models. Good discrimination over outlier (never seen, non-target) patterns is still kept based on their distances to the class model. Face and non-face classification is used as an example and some promising results are reported.
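A bare-bones illustration of one-class learning: fit a model from target-class samples only (here, a centroid plus a distance threshold) and reject far-away points as outliers. The synthetic features and the two-sigma threshold rule are assumptions for the sketch, not the wavelet/HMT features used in the paper.

```python
# One-class classification by distance to the target-class centroid:
# train on target samples only, flag far-away points as outliers.
import numpy as np

def fit_centroid(X_target: np.ndarray):
    mu = X_target.mean(axis=0)
    dists = np.linalg.norm(X_target - mu, axis=1)
    threshold = dists.mean() + 2.0 * dists.std()   # assumed acceptance rule
    return mu, threshold

def is_target(x: np.ndarray, mu, threshold) -> bool:
    return np.linalg.norm(x - mu) <= threshold

rng = np.random.default_rng(0)
faces = rng.normal(0.0, 1.0, size=(200, 16))       # stand-in "face" features
mu, thr = fit_centroid(faces)
print(is_target(rng.normal(0.0, 1.0, size=16), mu, thr))   # likely True
print(is_target(rng.normal(6.0, 1.0, size=16), mu, thr))   # likely False
```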
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Unified Lossy and Near-Lossless Hyperspectral Image Compression Based on JPEG 2000 We propose a compression algorithm for hyperspectral images featuring both lossy and near- lossless compression. The algorithm is based on JPEG 2000, and provides better near-lossless compression performance than existing schemes. We also show that its effect on the results of selected applications is negligible.
Near-Lossless Compression of Hyperspectral Images Algorithms for near-lossless compression of hyperspectral images are presented. They guarantee that the intensity of any pixel in the decompressed image(s) differs from its original value by no more than a user-specified quantity. To reduce the bit rate required to code images while providing significantly more compression than lossless algorithms, linear prediction between the bands is used. Each band is predicted by a previously transmitted band. The prediction is subtracted from the original band, and the residual is compressed with a bit plane coder which uses context-based adaptive binary arithmetic coding. To find the best prediction algorithm, the impact of various band orderings and optimization techniques on the compression ratios is studied.
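The near-lossless guarantee mentioned above can be illustrated with interband linear prediction followed by uniform quantization of the residual, which bounds the reconstruction error by the chosen step. The sketch below is illustrative only and omits the paper's band ordering and bit-plane arithmetic coder; the least-squares gain predictor is an assumption.

```python
# Near-lossless interband prediction sketch: predict band k from band k-1
# with a scalar gain, then uniformly quantize the residual so that the
# reconstruction error stays within a user bound `delta`.
import numpy as np

def encode_band(band, ref, delta):
    a = float(np.dot(ref.ravel(), band.ravel()) /
              (np.dot(ref.ravel(), ref.ravel()) + 1e-12))      # LS gain
    residual = band - a * ref
    q = np.round(residual / (2 * delta + 1)).astype(np.int32)  # quantized indices
    return a, q

def decode_band(a, q, ref, delta):
    return a * ref + q * (2 * delta + 1)

rng  = np.random.default_rng(0)
ref  = rng.integers(0, 1000, size=(8, 8)).astype(np.float64)
band = 0.9 * ref + rng.normal(0, 5, size=(8, 8))
a, q = encode_band(band, ref, delta=2)
rec  = decode_band(a, q, ref, delta=2)
print(np.max(np.abs(rec - band)))   # <= delta + 0.5 here (<= delta for integer residuals)
```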
Stationary probability model for bitplane image coding through local average of wavelet coefficients. This paper introduces a probability model for symbols emitted by bitplane image coding engines, which is conceived from a precise characterization of the signal produced by a wavelet transform. Main insights behind the proposed model are the estimation of the magnitude of wavelet coefficients as the arithmetic mean of its neighbors' magnitude (the so-called local average), and the assumption that emitted bits are under-complete representations of the underlying signal. The local average-based probability model is introduced in the framework of JPEG2000. While the resulting system is not JPEG2000 compatible, it preserves all features of the standard. Practical benefits of our model are enhanced coding efficiency, more opportunities for parallelism, and improved spatial scalability.
Hyperspectral image compression using entropy-constrained predictive trellis coded quantization A training-sequence-based entropy-constrained predictive trellis coded quantization (ECPTCQ) scheme is presented for encoding autoregressive sources. For encoding a first-order Gauss-Markov source, the mean squared error (MSE) performance of an eight-state ECPTCQ system exceeds that of entropy-constrained differential pulse code modulation (ECDPCM) by up to 1.0 dB. In addition, a hyperspectral image compression system is developed, which utilizes ECPTCQ. A hyperspectral image sequence compressed at 0.125 b/pixel/band retains an average peak signal-to-noise ratio (PSNR) of greater than 43 dB over the spectral bands
A Near Lossless Wavelet-Based Compression Scheme for Satellite Images In this paper, a near lossless image compression algorithm is presented for high quality satellite image compression. The proposed algorithm makes use of the recommendation for image data compression from the Consultative Committee for Space Data Systems (CCSDS) and specific residue image bit-plane compensation. Comparing with the recommendation for satellite image compression from CCSDS, the proposed algorithm can reconstruct near lossless images with less bit rate than the recommendation of CCSDS does. Benefited from run-length coding and specific residue image bit-plane compensation, the proposed algorithm can obtain higher quality satellite image at similar bit rate or lower bit rate at the similar image quality. These results are valuable for reducing transmission time of high quality satellite image data. This work can be further improved by combining other binary compression techniques and the extension of this work may offer a VLSI or a DSP implementation of the proposed algorithm. Satellite image transmission and storage system can benefit by the proposed algorithm.
Near-lossless image compression by relaxation-labelled prediction This paper describes a differential pulse code modulation scheme suitable for lossless and near-lossless compression of monochrome still images. The proposed method is based on a classified linear-regression prediction followed by context-based arithmetic coding of the outcome residuals. Images are partitioned into blocks, typically 8 × 8, and a minimum mean square error linear predictor is calculated for each block. Given a preset number of classes, a clustering algorithm produces an initial guess of as many predictors to be fed to an iterative labelling procedure that classifies pixel blocks simultaneously refining the associated predictors. The final set of predictors is encoded, together with the labels identifying the class, and hence the predictor, to which each block belongs. A thorough performance comparison, both lossless and near-lossless, with advanced methods from the literature and both current and upcoming standards highlights the advantages of the proposed approach. The method provides impressive performances, especially on medical images. Coding times are affordable thanks to fast convergence of training and easy balance between compression and computation by varying the system's parameters. Decoding is always real-time thanks to the absence of training.
Performance Evaluation of the H.264/AVC Video Coding Standard for Lossy Hyperspectral Image Compression In this paper, a performance evaluation of the state-of-the-art H.264/AVC video coding standard is carried out with the aim of determining its feasibility when applied to hyperspectral image compression. Results are obtained based on configuring diverse parameters in the encoder in order to achieve an optimal trade-off between compression ratio, accuracy of unmixing and computation time. In this s...
Low-complexity predictive lossy compression of hyperspectral and ultraspectral images Lossy compression of hyperspectral and ultraspectral images is traditionally performed using 3D transform coding. This approach yields good performance, but its complexity and memory requirements are unsuitable for onboard compression. In this paper we propose a low-complexity lossy compression scheme based on prediction, uniform threshold quantization, and rate-distortion optimization. Its performance is competitive with that of state-of-the-art 3D transform coding schemes, but the complexity is immensely lower. The algorithm is able to limit the scope of errors, and is amenable to parallel implementation, making it suitable for onboard compression at high throughputs.
Fuzzy logic-based matching pursuits for lossless predictive coding of still images This paper presents an application of fuzzy-logic techniques to the reversible compression of grayscale images. With reference to a spatial differential pulse code modulation (DPCM) scheme, prediction may be accomplished in a space-varying fashion either as adaptive, i.e., with predictors recalculated at each pixel, or as classified, in which image blocks or pixels are labeled in a number of classes, for which fitting predictors are calculated. Here, an original tradeoff is proposed; a space-varying linear-regression prediction is obtained through fuzzy-logic techniques as a problem of matching pursuit, in which a predictor different for every pixel is obtained as an expansion in series of a finite number of prototype nonorthogonal predictors, that are calculated in a fuzzy fashion as well. To enhance entropy coding, the spatial prediction is followed by context-based statistical modeling of prediction errors. A thorough comparison with the most advanced methods in the literature, as well as an investigation of performance trends and computing times to work parameters, highlight the advantages of the proposed fuzzy approach to data compression.
I-structures: data structures for parallel computing It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches-lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structure as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not lose determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
Visual support for reengineering work processes
Rank and relevance in novelty and diversity metrics for recommender systems The Recommender Systems community is paying increasing attention to novelty and diversity as key qualities beyond accuracy in real recommendation scenarios. Despite the raise of interest and work on the topic in recent years, we find that a clear common methodological and conceptual ground for the evaluation of these dimensions is still to be consolidated. Different evaluation metrics have been reported in the literature but the precise relation, distinction or equivalence between them has not been explicitly studied. Furthermore, the metrics reported so far miss important properties such as taking into consideration the ranking of recommended items, or whether items are relevant or not, when assessing the novelty and diversity of recommendations. We present a formal framework for the definition of novelty and diversity metrics that unifies and generalizes several state of the art metrics. We identify three essential ground concepts at the roots of novelty and diversity: choice, discovery and relevance, upon which the framework is built. Item rank and relevance are introduced through a probabilistic recommendation browsing model, building upon the same three basic concepts. Based on the combination of ground elements, and the assumptions of the browsing model, different metrics and variants unfold. We report experimental observations which validate and illustrate the properties of the proposed metrics.
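As an illustration of rank-aware novelty and diversity measures (not the paper's exact probabilistic framework), the sketch below computes an intra-list diversity score and a rank-discounted, popularity-based novelty score for a toy recommendation list; the item data are invented.

```python
# Two simple recommendation-list metrics: intra-list diversity (average
# pairwise dissimilarity) and a rank-discounted novelty score based on
# item popularity. Illustrative only.
import math

def intra_list_diversity(items, dissim):
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    return sum(dissim(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def rank_discounted_novelty(items, popularity, total_users):
    # novelty of an item ~ -log2 of its popularity; discount by log of rank
    score = 0.0
    for rank, item in enumerate(items, start=1):
        p = popularity.get(item, 1) / total_users
        score += -math.log2(p) / math.log2(rank + 1)
    return score / len(items)

recs = ["a", "b", "c"]
genres = {"a": {"rock"}, "b": {"jazz"}, "c": {"rock", "pop"}}
dissim = lambda x, y: 1 - len(genres[x] & genres[y]) / len(genres[x] | genres[y])
popularity = {"a": 900, "b": 40, "c": 120}
print(intra_list_diversity(recs, dissim))
print(rank_discounted_novelty(recs, popularity, total_users=1000))
```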
Architecture of the Symbolics 3600 No abstract available.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.024079
0.028571
0.016966
0.009527
0.007522
0.00525
0.002392
0.000438
0.000045
0
0
0
0
0
Numerically efficient probabilistic guarantees for resource reservations This paper presents an efficient algorithm for providing probabilistic guarantees in soft real-time systems using resource reservations. We use a conservative model for the temporal evolution of a resource reservation, which has a particular structure - a quasi birth death process - enabling an efficient computation of the stationary probability of respecting deadlines. We show the accuracy and the efficiency of the method in a large set of experiments.
An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real-Time Systems We show a methodology for the computation of the probability of deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique for the system that reduces the computation of such a probability to that of the steady state probability of an infinite state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution where different accuracy/computation time trade-offs can be obtained by operating on the granularity of the model. More importantly we offer a closed form conservative bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability in one real-time application of practical interest. When this bound is used for the optimisation of the overall Quality of Service for a set of tasks sharing the CPU, it produces a good sub-optimal solution in a small amount of time.
Efficient and robust probabilistic guarantees for real-time tasks This paper presents a new method for providing probabilistic real-time guarantees to tasks scheduled through resource reservations. Previous work on probabilistic analysis of reservation-based schedulers is extended by improving the efficiency and robustness of the probability computation. Robustness is improved by accounting for a possibly incomplete knowledge of the distribution of the computation times (which is typical in realistic applications). The proposed approach computes a conservative bound for the probability of missing deadlines, based on the knowledge of the probability distributions of the execution times and of the inter-arrival times of the tasks. In this paper, such a bound is computed in realistic situations, comparing it with simulative results and with the exact computation of deadline miss probabilities (without pessimistic bounds). Finally, the impact of the incomplete knowledge of the execution times distribution is evaluated.
An Analytical Bound for Probabilistic Deadlines The application of a resource reservation scheduler to soft real-time systems requires effective means to compute the probability of a deadline miss given a particular choice for the scheduling parameters. This is a challenging research problem, for which only numeric solutions, complex and difficult to manage, are currently available. In this paper, we adopt an analytical approach. By using an approximate and conservative model for the evolution of a periodic task scheduled through a reservation, we construct a closed form lower bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability for many real-time applications of interest.
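A brute-force numeric baseline for the quantities that these reservation papers bound analytically is a Monte Carlo estimate of the deadline-miss probability. The sketch below uses a deliberately simplified backlog recursion (deadline equal to period, fixed budget delivered per period) and an invented execution-time distribution; it is not the model of any of the cited papers.

```python
# Crude Monte Carlo estimate of the deadline-miss probability of a periodic
# task served by a CPU reservation that grants a budget `budget` every period.
import random

def miss_probability(exec_dist, budget, n_jobs=200_000, seed=0):
    """exec_dist: list of (execution_time, probability) pairs."""
    random.seed(seed)
    times, probs = zip(*exec_dist)
    backlog, misses = 0.0, 0
    for _ in range(n_jobs):
        c = random.choices(times, weights=probs)[0]
        demand = backlog + c
        if demand > budget:          # job does not finish within its period
            misses += 1
        backlog = max(0.0, demand - budget)
    return misses / n_jobs

# execution time: 2 ms w.p. 0.7, 4 ms w.p. 0.25, 8 ms w.p. 0.05;
# the reservation grants 5 ms of CPU per period
print(miss_probability([(2, 0.7), (4, 0.25), (8, 0.05)], budget=5))
```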
A framework for the response time analysis of fixed-priority tasks with stochastic inter-arrival times Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks where tasks have inter-arrival times described by discrete probabilistic distribution functions, instead of minimum inter-arrival (MIT) values.
Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
Hypertext: An Introduction and Survey
A field study of the software design process for large systems The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.
Four dark corners of requirements engineering Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the "four dark corners," exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed.
Drawing graphs nicely using simulated annealing The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.
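A stripped-down version of the idea in the abstract above: the sketch below anneals node positions under a cost made of squared edge lengths plus node-node repulsion, accepting moves with the Metropolis rule. It implements only two simple aesthetic criteria, not the paper's full set, and all parameters are arbitrary.

```python
# Tiny simulated-annealing graph layout (illustrative sketch).
import math, random

def cost(pos, edges):
    c = sum((pos[u][0] - pos[v][0]) ** 2 + (pos[u][1] - pos[v][1]) ** 2
            for u, v in edges)                     # short edges preferred
    nodes = list(pos)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            d2 = (pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2
            c += 1.0 / (d2 + 1e-6)                 # node-node repulsion
    return c

def anneal(nodes, edges, steps=5000, t0=1.0, seed=0):
    random.seed(seed)
    pos = {n: [random.random(), random.random()] for n in nodes}
    current = cost(pos, edges)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3            # cooling schedule
        n = random.choice(nodes)
        old = pos[n][:]
        pos[n] = [old[0] + random.gauss(0, 0.1), old[1] + random.gauss(0, 0.1)]
        new = cost(pos, edges)
        if new < current or random.random() < math.exp((current - new) / t):
            current = new                          # accept (Metropolis rule)
        else:
            pos[n] = old                           # reject and revert
    return pos

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(anneal(nodes, edges))
```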
Animating TLA Specifications TLA (the Temporal Logic of Actions) is a linear temporal logic for specifying and reasoning about reactive systems. We define a subset of TLA whose formulas are amenable to validation by animation, with the intent to facilitate the communication between domain and solution experts in the design of reactive systems. The Temporal Logic of Actions (TLA) has been proposed by Lamport (21) for the specification and verification of reactive and concurrent systems. TLA models describe infinite sequences of states, called behaviors, that correspond to the execution of the system being specified. System specifications in TLA are usually written in a canonical form, which consists of specifying the initial states, the possible moves of the system, and supplementary fairness properties. Because such specifications are akin to the descriptions of automata and often have a strongly operational flavor, it is tempting to take such a formula and "let it run". In this paper, we define an interpreter algorithm for a suitable subset of TLA. The interpreter generates (finite) runs of the system described by the specification, which can thus be validated by the user. For reasons of complexity, it is impossible to animate an arbitrary first-order TLA specification; even the satisfiability problem for that logic is -complete. Our restrictions concern the syntactic form of specifications, which ensure that finite models can be generated incrementally. They do not constrain the domains of system variables or restrict the non-determinism inherent in a specification, which is important in the realm of reactive systems. In contrast, model checking techniques allow to exhaustively analyse the (infinite) runs of finite-state systems. It is generally agreed that the development of reactive systems benefits from the use of both animation for the initial modelling phase, complemented by model checking of system abstractions for the verification of crucial system components. The organization of the paper is as follows: in sections 2 and 3 we discuss the overall role of animation for system development, illustrating its purpose at the hand of a simple example, and discuss executable temporal logics. Section 4 constitutes the main body of this paper; we there define the syntax and semantics of an executable
UTP and Sustainability Hoare and He’s approach to unifying theories of programming, UTP, is a dozen years old. In spite of the importance of its ideas, UTP does not seem to be attracting due interest. The purpose of this article is to discuss why that is the case, and to consider UTP’s destiny. To do so it analyses the nature of UTP, focusing primarily on unification, and makes suggestions to expand its use.
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.052775
0.05
0.05
0.032222
0.014456
0
0
0
0
0
0
0
0
0
Compression of Hyperspectral Images Using Discerete Wavelet Transform and Tucker Decomposition The compression of hyperspectral images (HSIs) has recently become a very attractive issue for remote sensing applications because of their volumetric data. In this paper, an efficient method for hyperspectral image compression is presented. The proposed algorithm, based on Discrete Wavelet Transform and Tucker Decomposition (DWT-TD), exploits both the spectral and the spatial information in the images. The core idea behind our proposed technique is to apply TD on the DWT coefficients of spectral bands of HSIs. We use DWT to effectively separate HSIs into different sub-images and TD to efficiently compact the energy of sub-images. We evaluate the effect of the proposed method on real HSIs and also compare the results with the well-known compression methods. The obtained results show a better performance of the proposed method. Moreover, we show the impact of compression HSIs on the supervised classification and linear unmixing.
A New On-Board Image Codec Based on Binary Tree With Adaptive Scanning Order in Scan-Based Mode Remote sensing images offer a large amount of information but require on-board compression because of the storage and transmission constraints of on-board equipment. JPEG2000 is too complex to become a recommended standard for the mission, and CCSDS-IDC fixes most of the parameters and only provides quality scalability. In this paper, we present a new, low-complexity, low-memory, and efficient embedded wavelet image codec for on-board compression. First, we propose the binary tree as a novel and robust way of coding remote sensing image in wavelet domain. Second, we develop an adaptive scanning order to traverse the binary tree level by level from the bottom to the top, so that better performance and visual effect are attained. Last, the proposed method is processed with a scan-based mode, which significantly reduces the memory requirement. The proposed method is very fast because it does not use any entropy coding and rate-distortion optimization, while it provides quality, position, and resolution scalability. Being less complex, it is very easy to implement in hardware and very suitable for on-board compression. Experimental results show that the proposed method can significantly improve peak signal-to-noise ratio compared with SPIHT without arithmetic coding and scan-based CCSDS-IDC, and is similar to scan-based JPEG2000.
Multitemporal Hyperspectral Image Compression. The compression of multitemporal hyperspectral imagery is considered, wherein the encoder uses a reference image to effectuate temporal decorrelation for the coding of the current image. Both linear prediction and a spectral concatenation of images are explored to this end. Experimental results demonstrate that, when there are few changes between two images, the gain in rate-distortion performance...
Compression of hyperspectral remote sensing images by tensor approach. Whereas the transform coding algorithms have been proved to be efficient and practical for grey-level and color images compression, they could not directly deal with the hyperspectral images (HSI) by simultaneously considering both the spatial and spectral domains of the data cube. The aim of this paper is to present an HSI compression and reconstruction method based on the multi-dimensional or tensor data processing approach. By representing the observed hyperspectral image cube to a 3-order-tensor, we introduce a tensor decomposition technology to approximately decompose the original tensor data into a core tensor multiplied by a factor matrix along each mode. Thus, the HSI is compressed to the core tensor and could be reconstructed by the multi-linear projection via the factor matrices. Experimental results on particular applications of hyperspectral remote sensing images such as unmixing and detection suggest that the reconstructed data by the proposed approach significantly preserves the HSI's data quality in several aspects.
Low-Complexity Compression Method for Hyperspectral Images Based on Distributed Source Coding. In this letter, we propose a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme for hyperspectral images. First, the DCT was applied to the hyperspectral images. Then, set-partitioning-based approach was utilized to reorganize DCT coefficients into waveletlike tree structure and extract the sign, refinement, and significance bitplanes. Third, low-density pa...
Lossy-to-Lossless Hyperspectral Image Compression Based on Multiplierless Reversible Integer TDLT/KLT We proposed a new transform scheme of multiplierless reversible time-domain lapped transform and Karhunen-Loeve transform (RTDLT/KLT) for lossy-to-lossless hyperspectral image compression. Instead of applying discrete wavelet transform (DWT) in the spatial domain, RTDLT is applied for decorrelation. RTDLT can be achieved by existing discrete cosine transform and pre- and postfilters, while the rev...
Clustered dpcm for the lossless compression of hyperspectral images A clustered differential pulse code modulation lossless compression method for hyperspectral images is presented. The spectra of a hyperspectral image is clustered, and an optimized predictor is calculated for each cluster. Prediction is performed using a linear predictor. After prediction, the difference between the predicted and original values is computed. The difference is entropy-coded using ...
Fast multiplierless approximations of the DCT with the lifting scheme We present the design, implementation, and application of several families of fast multiplierless approximations of the discrete cosine transform (DCT) with the lifting scheme called the binDCT. These binDCT families are derived from Chen's (1977) and Loeffler's (1989) plane rotation-based factorizations of the DCT matrix, respectively, and the design approach can also be applied to a DCT of arbitrary size. Two design approaches are presented. In the first method, an optimization program is defined, and the multiplierless transform is obtained by approximating its solution with dyadic values. In the second method, a general lifting-based scaled DCT structure is obtained, and the analytical values of all lifting parameters are derived, enabling dyadic approximations with different accuracies. Therefore, the binDCT can be tuned to cover the gap between the Walsh-Hadamard transform and the DCT. The corresponding two-dimensional (2-D) binDCT allows a 16-bit implementation, enables lossless compression, and maintains satisfactory compatibility with the floating-point DCT. The performance of the binDCT in JPEG, H.263+, and lossless compression is also demonstrated
Generalized Kraft inequality and arithmetic coding Algorithms for encoding and decoding finite strings over a finite alphabet are described. The coding operations are arithmetic involving rational numbers l_i as parameters such that ∑_i 2^(−l_i) ≤ 2^(−ε). This coding technique requires no blocking, and the per-symbol length of the encoded string approaches the associated entropy within ε. The coding speed is comparable to that of conventional coding methods.
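The condition quoted above is easy to check numerically; the sketch below verifies the Kraft sum for a set of code lengths and compares the expected ideal code length with the source entropy. The example distribution is arbitrary.

```python
# Check the (generalized) Kraft condition sum_i 2**(-l_i) <= 2**(-eps) for a
# set of per-symbol code lengths, and compare the expected length of the
# ideal lengths -log2(p) with the source entropy. Illustrative sketch only.
import math

def kraft_sum(lengths):
    return sum(2.0 ** (-l) for l in lengths)

probs = [0.5, 0.25, 0.125, 0.125]
ideal = [-math.log2(p) for p in probs]          # 1, 2, 3, 3 bits
entropy = -sum(p * math.log2(p) for p in probs)
expected = sum(p * l for p, l in zip(probs, ideal))

print(kraft_sum(ideal))                  # 1.0, i.e. satisfies Kraft with eps = 0
print(entropy, expected)                 # both 1.75 bits/symbol
```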
A scheme for robust distributed sensor fusion based on average consensus We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
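The averaging step underlying the scheme above can be sketched in a few lines: each node repeatedly replaces its value with a weighted average of its own and its neighbours' values, here with Metropolis weights on a small fixed graph. This is an illustrative choice only; the paper also covers time-varying topologies and builds the maximum-likelihood estimate on top of this step.

```python
# Distributed average consensus on a fixed undirected graph using
# Metropolis weights (sketch of the averaging step only).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:
    w = 1.0 / (1 + max(deg[i], deg[j]))    # Metropolis weight for edge (i, j)
    W[i, j] = W[j, i] = w
for i in range(n):
    W[i, i] = 1.0 - W[i].sum()             # make each row sum to one

x = np.array([3.0, 7.0, 1.0, 5.0])         # local measurements
for _ in range(100):
    x = W @ x                              # one consensus iteration
print(x)                                   # each entry close to the average 4.0
```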
Proceedings of the 2nd International Conference on Pragmatic Web, ICPW 2007, Tilburg, The Netherlands, October 22-23, 2007
Data refinement by miracles Data refinement is the transformation in a computer program of one data type to another. Usually, we call the original data type ‘abstract’ and the final data type ‘concrete’. The concrete data type is said to represent the abstract. In spite of recent advances, there remain obvious data refinements that are difficult to prove. We give such a refinement and present a new technique that avoids the difficulty. Our innovation is the use of program fragments that do not satisfy Dijkstra's Law of the excluded miracle. These of course can never be implemented, so they must be eliminated before the final program is reached. But, in the intermediate stages of development, they simplify the calculations.
On confusion between requirements and their representations Requirements representations are often confused with requirements. This confusion is not just widespread in practice, but it exists even in the latest requirements engineering research and theory, leading to a number of negative consequences. In this article, we discuss these negative consequences, and present a solution based on a strict distinction between requirements per se and requirements representations. We elaborate on this distinction and classify different forms of representations in a unified requirements representations ontology, including a refinement of descriptive and model-based requirements representations.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.024636
0.028148
0.026765
0.01679
0.011141
0.005038
0.001742
0.000245
0.000005
0
0
0
0
0
Proof Rules Dealing with Fairness We provide proof rules allowing to deal with two fairness assumptions in the context of Dijkstra's do-od programs. These proof rules are obtained by considering a translated version of the original program which uses random assignment x:=? and admits only fair runs. The proof rules use infinite ordinals and deal with the original programs and not their translated versions.
Specifications of Concurrently Accessed Data Our specification of the buffer illustrates how some of the requirements described in the introduction are met. The specification is concise, and it can be manipulated easily. This allowed us to derive several properties of the buffer (Appendix A) and construct a proof of buffer concatenation (Section 4). Also refinement of the specification with the eventual goal of implementation seems feasible with this scheme.
Stepwise Refinement of Distributed Systems, Models, Formalisms, Correctness, REX Workshop, Mook, The Netherlands, May 29 - June 2, 1989, Proceedings
Stepwise Removal of Virtual Channels in Distributed Algorithms A stepwise refinement method for the design of correct distributed algorithms is studied. The method frees the program designer from all the details of the target architecture of the system in early stages of the design process. The method is applied to a new aspect in the construction of distributed systems, the removal of virtual channels. We exemplify the design method by deriving a distributed algorithm. We show that the performed refinements preserve the correctness of the algorithm.
A relational notation for state transition systems A relational notation for specifying state transition systems is presented. Several refinement relations between specifications are defined. To illustrate the concepts and methods, three specifications of the alternating-bit protocol are given. The theory is applied to explain auxiliary variables. Other applications of the theory to protocol verification, composition, and conversion are discussed. The approach is compared with previously published approaches.
Statecharts: A visual formalism for complex systems Abstract. We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system's other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system.
Programming Concepts, Methods and Calculi, Proceedings of the IFIP TC2/WG2.1/WG2.2/WG2.3 Working Conference on Programming Concepts, Methods and Calculi (PROCOMET '94) San Miniato, Italy, 6-10 June, 1994
Incremental Specification with Joint Actions: The RPC-Memory Specification Problem Solutions to the RPC-Memory Specification Problem are developed incrementally, using an object-oriented modeling formalism with multi-object actions. Incrementality is achieved by superposition-based derivation steps that make effective use of multiple inheritance and specialization of inherited actions. Each stage models collective behaviors of objects at some level of abstraction, and the preservation of all safety properties is guaranteed in each step. The aim of the approach is to support a design methodology that combines operational intuition with formal reasoning in TLA and is suited for the use of animation tools.
A Calculus for Predicative Programming A calculus for developing programs from specifications written as predicates that describe the relationship between the initial and final state is proposed. Such specifications are well known from the specification language Z. All elements of a simple sequential programming notation are defined in terms of predicates. Hence programs form a subset of specifications. In particular, sequential composition is defined by demonic composition, non-deterministic choice by demonic disjunction, and iteration by fixed points. Laws are derived which allow proving equivalence and refinement of specifications and programs by a series of steps. The weakest precondition calculus is also included. The approach is compared to the predicative programming approach of E. Hehner and to other refinement calculi.
Automating the Transformational Development of Software This paper reports on efforts to extend the transformational implementation (TI) model of software development [1]. In particular, we describe a system that uses AI techniques to automate major portions of a transformational implementation. The work has focused on the formalization of the goals, strategies, selection rationale, and finally the transformations used by expert human developers. A system has been constructed that includes representations for each of these problem-solving components, as well as machinery for handling human-system interaction and problem-solving control. We will present the system and illustrate automation issues through two annotated examples.
A computer-aided prototyping system A description is given of an approach to rapid prototyping that uses a specification language (the Prototype-System Description Language, PSDL) integrated with a set of software tools, including an execution support system, a rewrite system, a syntax-directed editor with graphics capabilities, a software base, a design database, and a design-management system. The prototyping language lets the designer use dataflow diagrams with nonprocedural control constraints as part of the specification of a hierarchically structured prototype. The resulting description is free from programming-level details, in contrast to prototypes constructed with a programming language. The discussion covers the language and method, rewrite subsystem, design manager, software base, and execution support.
Recognizing contextual polarity in phrase-level sentiment analysis This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.
Fuzzy Time Series Forecasting With a Probabilistic Smoothing Hidden Markov Model Since its emergence, the study of fuzzy time series (FTS) has attracted more attention because of its ability to deal with the uncertainty and vagueness that are often inherent in real-world data resulting from inaccuracies in measurements, incomplete sets of observations, or difficulties in obtaining measurements under uncertain circumstances. The representation of fuzzy relations that are obtained from a fuzzy time series plays a key role in forecasting. Most of the works in the literature use the rule-based representation, which tends to encounter the problem of rule redundancy. A remedial forecasting model was recently proposed in which the relations were established based on the hidden Markov model (HMM). However, its forecasting performance generally deteriorates when encountering more zero probabilities owing to fewer fuzzy relationships that exist in the historical temporal data. This paper thus proposes an enhanced HMM-based forecasting model by developing a novel fuzzy smoothing method to overcome performance deterioration. To deal with uncertainty more appropriately, the roulette-wheel selection approach is applied to probabilistically determine the forecasting result. The effectiveness of the proposed model is validated through real-world forecasting experiments, and performance comparison with other benchmarks is conducted by a Monte Carlo method.
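The roulette-wheel step mentioned at the end of this abstract is fitness-proportionate sampling; a generic sketch is given below, with made-up candidate forecasts and weights standing in for the model's defuzzified outputs.

```python
# Generic roulette-wheel (fitness-proportionate) selection, as commonly used to
# pick one outcome with probability proportional to its weight (illustrative).
import random

def roulette_select(items, weights, rng=random):
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if r <= acc:
            return item
    return items[-1]  # guard against floating-point round-off

# Hypothetical defuzzified forecast candidates with state probabilities.
candidates = [102.5, 104.0, 107.5]
probs = [0.6, 0.3, 0.1]
print(roulette_select(candidates, probs, random.Random(42)))
```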
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.200336
0.200336
0.100236
0.080172
0.030845
0.001633
0.000226
0.000099
0.000055
0.000009
0
0
0
0
A note on equivalence between two integral inequalities for time-delay systems. Jensen's inequality and extended Jensen's inequality are two important integral inequalities when problems of stability analysis and controller synthesis for time-delay systems are considered. The extended Jensen's inequality introduces two additional free matrices and is generally regarded to be less conservative than Jensen's inequality. The equivalence between Jensen's inequality and extended Jensen's inequality in bounding the quadratic term $-h\int_{t-h}^{t}\dot{x}^{T}(s)Z\dot{x}(s)\,\mathrm{d}s$ in the Lyapunov functional of time-delay systems is presented and theoretically proved. It is shown that the extended Jensen's inequality does not decrease the lower bound of this quadratic term obtained using Jensen's inequality, and thus it does not reduce the conservativeness even though two additional free matrices $M_1$ and $M_2$ are involved.
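For reference, the Jensen integral inequality discussed in notes like this one is commonly stated as follows, for $Z = Z^{T} > 0$ and constant delay $h > 0$ (a standard textbook form, not a quotation from the note):

```latex
% Jensen's integral inequality (standard form), Z = Z^T > 0, h > 0:
-h \int_{t-h}^{t} \dot{x}^{T}(s) Z \dot{x}(s) \, \mathrm{d}s
\;\le\;
-\left( \int_{t-h}^{t} \dot{x}(s) \, \mathrm{d}s \right)^{T}
 Z
 \left( \int_{t-h}^{t} \dot{x}(s) \, \mathrm{d}s \right)
= -\bigl( x(t) - x(t-h) \bigr)^{T} Z \bigl( x(t) - x(t-h) \bigr)
```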
Generalized Jensen Inequalities with Application to Stability Analysis of Systems with Distributed Delays over Infinite Time-Horizons. The Jensen inequality has been recognized as a powerful tool to deal with the stability of time-delay systems. Recently, a new inequality that encompasses the Jensen inequality was proposed for the stability analysis of systems with finite delays. In this paper, we first present a generalized integral inequality and its double integral extension. It is shown how these inequalities can be applied to improve the stability result for linear continuous-time systems with gamma-distributed delays. Then, for the discrete-time counterpart we provide an extended Jensen summation inequality with infinite sequences, which leads to less conservative stability conditions for linear discrete-time systems with Poisson-distributed delays. The improvements obtained by the introduced generalized inequalities are demonstrated through examples.
Input-output framework for robust stability of time-varying delay systems The paper is devoted to the stability analysis of linear systems with time-varying delay. We first model the time-varying delay system as an interconnection between a known linear transformation and some operators depending explicitly on the delay. Embedding the operators related to the delay into an uncertain set, the stability analysis of such a system is then performed by adopting the quadratic separation approach. Having recognized that the conservatism comes from the choice of the feedback modeling and the definition of the operators, these first results are afterwards enhanced by using a redundant equation and a scaling filter. Finally, numerical examples are given to illustrate the results.
A note on Wirtinger-type integral inequalities for time-delay systems Different integral inequalities play an important role when problems of stability analysis and controller synthesis for time-delay systems are considered. The connection between Jensen's inequality and the extended Jensen's inequality is now well understood. To reduce the conservativeness introduced by the application of Jensen's inequality, several versions of the Wirtinger integral inequality have been published recently. This note presents a comparison between some of these inequalities.
On the Use of the Wirtinger Inequalities for Time-Delay Systems. The paper addresses the stability problem of linear time-delay systems. In the literature, the most popular approach to tackle this problem relies on the use of Lyapunov-Krasovskii functionals. Many results have proposed new functionals and techniques for deriving less and less conservative stability conditions. Nevertheless, all these approaches use the same trick, the well-known Jensen's inequality, which generally induces some conservatism that is difficult to overcome. In light of those observations, we propose to reduce the conservatism of Lyapunov-Krasovskii functionals by introducing new classes of integral inequalities called Wirtinger inequalities. This integral-type inequality is first shown to encompass Jensen's inequality and is then employed to derive new stability conditions. To this end, a slightly modified Lyapunov functional is proposed. Several examples illustrate the effectiveness of our methodology.
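One frequently cited Wirtinger-based integral inequality of the kind referred to in these abstracts is the following (stated here as background, with $R = R^{T} > 0$; the exact form used in each paper may differ):

```latex
% One common Wirtinger-based integral inequality, R = R^T > 0, a < b:
\int_{a}^{b} \dot{x}^{T}(s) R \dot{x}(s) \, \mathrm{d}s
\;\ge\;
\frac{1}{b-a} \, \Omega_{0}^{T} R \, \Omega_{0}
+ \frac{3}{b-a} \, \Omega_{1}^{T} R \, \Omega_{1},
\qquad
\Omega_{0} = x(b) - x(a), \quad
\Omega_{1} = x(b) + x(a) - \frac{2}{b-a} \int_{a}^{b} x(s) \, \mathrm{d}s
```

Dropping the second (nonnegative) term recovers Jensen's inequality, which is why this bound is at least as tight.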
Stability of time-delay systems via Wirtinger-based double integral inequality Based on the Wirtinger-based integral inequality, a double integral form of the Wirtinger-based integral inequality (hereafter called the Wirtinger-based double integral inequality) is introduced in this paper. To show the effectiveness of the proposed inequality, two stability criteria for systems with discrete and distributed delays are derived within the framework of linear matrix inequalities (LMIs). The advantage of employing the proposed inequalities is illustrated via two numerical examples.
Improved exponential stability criteria for time-varying delayed neural networks This paper is concerned with the exponential stability of neural networks with mixed time-varying delays. By using a more general delay-partitioning approach, an augmented Lyapunov functional that contains some information about the neuron activation function is constructed. In order to derive less conservative results, an adjustable parameter is introduced to divide the range of the activation function into two unequal subintervals. Moreover, the application of a combination of integral inequalities further reduces the conservativeness of the obtained exponential stability conditions. Numerical examples illustrate the advantages of the proposed conditions when compared with other results from the literature.
Bessel inequality for robust stability analysis of time-delay systems This paper addresses the problem of stability analysis for linear time-delay systems via a robust analysis approach, and especially the quadratic separation framework. To this end, we use the Bessel inequality to build operators that depend on the delay. They not only allow us to model the system as an uncertain feedback system but also to control the accuracy of the approximations made. Then, a set of LMI conditions is proposed which, on the examples, tends to the analytical bounds for both delay-dependent stability and delay-range stability.
Improved delay-dependent stability criteria for generalized neural networks with time-varying delays. This paper is concerned with the problem of stability analysis for generalized neural networks with time-varying delays. A novel integral inequality which includes several existing inequalities as special cases is presented. By employing a suitable Lyapunov–Krasovskii functional (LKF) and using the proposed integral inequality to estimate the derivative of the LKF, improved delay-dependent stability criteria expressed in terms of linear matrix inequalities are derived. Finally, four numerical examples are provided to demonstrate the effectiveness and the improvement of the proposed method.
Distributed Linear Estimation Over Sensor Networks We consider a network of sensors in which each node may collect noisy linear measurements of some unknown parameter. In this context, we study a distributed consensus diffusion scheme that relies only on bidirectional communication among neighbour nodes (nodes that can communicate and exchange data), and allows every node to compute an estimate of the unknown parameter that asymptotically converges to the true parameter. At each time iteration, a measurement update and a spatial diffusion phase are performed across the network, and a local least-squares estimate is computed at each node. The proposed scheme allows one to consider networks with dynamically changing communication topology, and it is robust to unreliable communication links and failures in measuring nodes. We show that under suitable hypotheses all the local estimates converge to the true parameter value.
System processes are software too This talk explores the application of software engineering tools, technologies, and approaches to developing and continuously improving systems by focusing on the systems' processes. The systems addressed are those that are complex coordinations of the efforts of humans, hardware devices, and software subsystems, where humans are on the “inside”, playing critical roles in the functioning of the system and its processes. The talk suggests that in such cases, the collection of processes that use the system is tantamount to being the system itself, suggesting that improving the system's processes amounts to improving the system. Examples of systems from a variety of different domains that have been addressed and improved in this way will be presented and explored. The talk will suggest some additional untried software engineering ideas that seem promising as vehicles for supporting system development and improvement, and additional system domains that seem ripe for the application of this kind of software-based process technology. The talk will emphasize that these applications of software engineering approaches to systems has also had the desirable effect of adding to our understandings of software engineering. These understandings have created a software engineering research agenda that is complementary to, and synergistic with, agendas for applying software engineering to system development and improvement.
Document ranking and the vector-space model Efficient and effective text retrieval techniques are critical in managing the increasing amount of textual information available in electronic form. Yet text retrieval is a daunting task because it is difficult to extract the semantics of natural language texts. Many problems must be resolved before natural language processing techniques can be effectively applied to a large collection of texts. Most existing text retrieval techniques rely on indexing keywords. Unfortunately, keywords or index terms alone cannot adequately capture the document contents, resulting in poor retrieval performance. Yet keyword indexing is widely used in commercial systems because it is still the most viable way by far to process large amounts of text. Using several simplifications of the vector-space model for text retrieval queries, the authors seek the optimal balance between processing efficiency and retrieval effectiveness as expressed in relevant document rankings
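As a small illustration of the vector-space model with keyword indexing, the sketch below ranks a made-up toy corpus against a query using TF-IDF weights and cosine similarity; the documents, query and weighting details are assumptions, not taken from the article.

```python
# Sketch: vector-space retrieval with TF-IDF weighting and cosine ranking
# (illustrative toy corpus, not from the article).
import math
from collections import Counter

docs = {
    "d1": "arithmetic coding for data compression",
    "d2": "distributed consensus in sensor networks",
    "d3": "lossless data compression of images",
}
query = "data compression"

def tf_idf_vectors(texts):
    tokenised = {k: v.split() for k, v in texts.items()}
    n = len(tokenised)
    df = Counter(t for toks in tokenised.values() for t in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = {k: {t: c * idf[t] for t, c in Counter(toks).items()}
            for k, toks in tokenised.items()}
    return vecs, idf

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs, idf = tf_idf_vectors(docs)
qvec = {t: c * idf.get(t, 0.0) for t, c in Counter(query.split()).items()}
ranking = sorted(docs, key=lambda d: cosine(qvec, vecs[d]), reverse=True)
print(ranking)  # documents ordered by decreasing similarity to the query
```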
A Program Refinement Tool The refinement calculus for the development of programs from specifications is well suited to mechanised support. We review the requirements for tool support of refinement as gleaned from our experience with existing refinement tools, and report on the design and implementation of a new tool to support refinement based on these requirements. The main features of the new tool are close integration of refinement and proof in a single tool (the same mechanism is used for both), good management of the refinement context, an extensible theory base that allows the tool to be adapted to new application domains, and a flexible user interface.
Optimal Prefix Codes for Pairs of Geometrically Distributed Random Variables Optimal prefix codes are studied for pairs of independent, integer-valued symbols emitted by a source with a geometric probability distribution of parameter $q$, $0 < q < 1$. By encoding pairs of symbols, it may be possible to reduce the redundancy penalty of symbol-by-symbol encoding, while preserving the simplicity of the encoding and decoding procedures typical of Golomb codes and their variants. It is shown that optimal codes for these so-called two-dimensional (2-D) geometric distributions are parameter singular, in the sense that a prefix code that is optimal for one value of the parameter $q$ cannot be optimal for any other value of $q$. This is in sharp contrast to the one-dimensional (1-D) case, where codes are optimal for positive-length intervals of the parameter $q$. Thus, in the 2-D case, it is infeasible to give a compact characterization of optimal codes for all values of the parameter $q$, as was done in the 1-D case. Instead, optimal codes are characterized for a discrete sequence of values of $q$ that provides good coverage of the unit interval. Specifically, optimal prefix codes are described for $q = 2^{-1/k}\;(k \geq 1)$, covering the range $q \geq \frac{1}{2}$, and $q = 2^{-k}\;(k > 1)$, covering the range $q < \frac{1}{2}$. The described codes produce the expected reduction in redundancy with respect to the 1-D case, while maintaining low-complexity coding operations.
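For background on the symbol-by-symbol baseline that such two-dimensional codes are compared against, here is a minimal sketch of Golomb-Rice coding (the Golomb code with parameter $m = 2^{k}$); the parameter choice and test values are illustrative only, and the paper's optimal codes for pairs are not reproduced.

```python
# Sketch: Golomb-Rice coding (Golomb code with parameter m = 2**k), the classic
# symbol-by-symbol scheme for geometrically distributed integers. Illustrative only.
def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)   # unary quotient + k-bit remainder

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) + r

for n in [0, 1, 5, 17]:
    code = rice_encode(n, 2)          # k = 2, i.e. m = 4
    assert rice_decode(code, 2) == n
    print(n, code)
```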
1.050779
0.025
0.012791
0.007729
0.004839
0.002252
0.000697
0.000162
0.000069
0.000001
0
0
0
0
Concurrent Scheduling Of Event-B Models Event-B is a refinement-based formal method that has been shown to be useful in developing concurrent and distributed programs. Large models can be decomposed into sub-models that can be refined semi-independently and executed in parallel. In this paper, we show how to introduce explicit control flow for the concurrent sub-models in the form of event schedules. We explore how schedules can be designed so that their application results in a correctness-preserving refinement step. For practical application, two patterns for schedule introduction are provided, together with their associated proof obligations. We demonstrate our method by applying it to the dining philosophers problem.
Creating sequential programs from event-B models Event-B is an emerging formal method with good tool support for various kinds of system modelling. However, the control flow in Event-B consists only of non-deterministic choice of enabled events. In many applications, notably in sequential program construction, more elaborate control flow mechanisms would be convenient. This paper explores a method, based on a scheduling language, for describing the flow of control. The aim is to be able to express schedules of events; to reason about their correctness; to create and verify patterns for introducing correct control flow. The conclusion is that using patterns, it is feasible to derive efficient sequential programs from event-based specifications in many cases.
Modelling and Analysing Dynamic Decentralised Systems We introduce a method to specify and analyse decentralised dynamic systems; the method is based on the combination of an event-based multi-process system specification approach with a multi-facet analysis approach that considers a reference abstract model and several specific ones derived from the abstract model in order to support facet-wise analysis. The method is illustrated with the modelling and the analysis of a mobile ad-hoc network. The Event-B framework and its related tools B4free and ProB are used to conduct the experiments.
Asynchronous system synthesis We propose a method for synthesising a set of components from a high-level specification of the intended behaviour of the target system. The designer proceeds via correctness-preserving transformation steps towards an implementable architecture of components which communicate asynchronously. The interface model of each component specifies the communication protocol used. At each step a pre-defined component is extracted and the correctness of the step is proved. This ensures the compatibility of the components. We use Action Systems as our formal approach to system design. The method is inspired by hardware-oriented approaches with their component libraries, but is more general. We also explore the possibility of using tool support to administer the derivation, as well as to assist in correctness proofs. Here we rely on the tools supporting the B Method, as this method is closely related to Action Systems and has good tool support.
csp2B: A Practical Approach to Combining CSP and B This paper describes the tool csp2B, which provides a means of combining CSP-like descriptions with standard B specifications. The notation of CSP provides a convenient way of describing the order in which the operations of a B machine may occur. The function of the tool is to convert CSP-like specifications into standard machine-readable B specifications, which means that they may be animated and appropriate proof obligations may be generated. Use of csp2B means that abstract specifications and refinements may be specified purely using CSP or using a combination of CSP and B. The translation is justified in terms of an operational semantics.
A correctness proof of a topology information maintenance protocol for a distributed computer network In order for the nodes of a distributed computer network to communicate, each node must have information about the network's topology. Since nodes and links sometimes crash, a scheme is needed to update this information. One of the major constraints on such a topology information scheme is that it may not involve a central controller. The Topology Information Protocol that was implemented on the MERIT Computer Network is presented and explained; this protocol is quite general and could be implemented on any computer network. It is based on Baran's “Hot Potato Heuristic Routing Doctrine.” A correctness proof of this Topology Information Protocol is also presented.
An assertional correctness proof of a distributed algorithm Using ordinary assertional methods for concurrent program verification, we prove the correctness of a distributed algorithm for maintaining message-routing tables in a network with communication lines that can fail. This shows that assertional reasoning about global states works well for distributed as well as nondistributed algorithms.
The lattice of data refinement We define a very general notion of data refinement which comprises the traditional notion of data refinement as a special case. Using the concepts of duals and adjoints we define converse commands and find a symmetry between ordinary data refinement and a dual (backward) data refinement. We show how ordinary and backward data refinement are interpreted as simulation and we derive rules for the piecewise data refinement of programs. Our results are valid for a general language, covering...
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
The Depth And Width Of Local Minima In Discrete Solution Spaces Heuristic search techniques such as simulated annealing and tabu search require "tuning" of parameters (i.e., the cooling schedule in simulated annealing, and the tabu list length in tabu search) to achieve optimum performance. In order for a user to anticipate the best choice of parameters, thus avoiding extensive experimentation, a better understanding of the solution space of the problem to be solved is needed. Two functions of the solution space, the maximum depth and the maximum width of local minima, are discussed here, and sharp bounds on the value of these functions are given for the 0-1 knapsack problem and the cardinality set covering problem.
Data compression using adaptive coding and partial string matching The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
Architecture and applications of the Hy+ visualization system The Hy+ system is a generic visualization tool that supports a novel visual query language called GraphLog. In Hy+, visualizations are based on a graphical formalism that allows comprehensible representations of databases, queries, and query answers to be interactively manipulated. This paper describes the design, architecture, and features of Hy+ with a number of applications in software engineering and network management.
Notes on Nonrepetitive Graph Colouring. A vertex colouring of a graph is nonrepetitive on paths if there is no path $v_1, v_2, \ldots, v_{2t}$ such that $v_i$ and $v_{t+i}$ receive the same colour for all $i = 1, 2, \ldots, t$. We determine the maximum density of a graph that admits a $k$-colouring that is nonrepetitive on paths. We prove that every graph has a subdivision that admits a 4-colouring that is nonrepetitive on paths. The best previous bound was 5. We also study colourings that are nonrepetitive on walks, and provide a conjecture that would imply that every graph with maximum degree $\Delta$ has a $f(\Delta)$-colouring that is nonrepetitive on walks. We prove that every graph with treewidth $k$ and maximum degree $\Delta$ has a $O(k\Delta)$-colouring that is nonrepetitive on paths, and a $O(k\Delta^{3})$-colouring that is nonrepetitive on walks.
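A small helper clarifying the definition used in this abstract: the sketch below checks whether the colour sequence along a single path contains an immediate block repetition (it does not enumerate all paths of a general graph); the example sequences are made up.

```python
# Sketch: check whether a colour sequence along a path contains a "repetition",
# i.e. a block of length t immediately followed by an identical block
# (v_1..v_2t with colour(v_i) = colour(v_{t+i})). Illustrative helper only.
def has_repetition(colours):
    n = len(colours)
    for start in range(n):
        for t in range(1, (n - start) // 2 + 1):
            if colours[start:start + t] == colours[start + t:start + 2 * t]:
                return True
    return False

print(has_repetition([1, 2, 1, 2]))   # True: the block (1, 2) repeats immediately
print(has_repetition([1, 2, 3, 1]))   # False: no block repeats immediately
```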
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.24
0.12
0.04
0.034286
0.021118
0.012
0.003429
0.000131
0
0
0
0
0
0
Missing requirements and relationship discovery through proxy viewpoints model This paper addresses the problem of "missing requirements" in software requirements specification (SRS) expressed in natural language. Due to rapid changes in technology and business frequently witnessed over time, the original SRS documents often experience the problems of missing, not available, and hard-to-locate requirements. One of the flaws in earlier solutions to this problem is the lack of consideration for missing requirements from multiple viewpoints. Furthermore, since such SRS documents represent an incomplete domain model, manual discovery (identification and incorporation) of missing requirements and relationships is highly labor intensive and error-prone. Consequently, deriving and improving an efficient adaptation of SRS changes remains a complex problem. In this paper, we present a new methodology entitled "Proxy Viewpoints Model-based Requirements Discovery (PVRD)". The PVRD methodology provides an integrated framework to construct a proxy viewpoints model from legacy status requirements and supports the requirements discovery process as well as efficient requirements management.
Viewpoints: principles, problems and a practical approach to requirements engineering The paper includes a survey and discussion of viewpoint-oriented approaches to requirements engineering and a presentation of new work in this area which has been designed with practical application in mind. We describe the benefits of viewpoint-oriented requirements engineering and describe the strengths and weaknesses of a number of viewpoint-oriented methods. We discuss the practical problems of introducing viewpoint-oriented requirements engineering into industrial software engineering practice and why these have prevented the widespread use of existing approaches. We then introduce a new model of viewpoints called Preview. Preview viewpoints are flexible, generic entities which can be used in different ways and in different application domains. We describe the novel characteristics of the Preview viewpoints model and the associated processes of requirements discovery, analysis and negotiation. Finally, we discuss how well this approach addresses some outstanding problems in requirements engineering (RE) and the practical industrial problems of introducing new requirements engineering methods.
Requirements engineering with viewpoints. The requirements engineering process involves a clear understanding of the requirements of the intended system. This includes the services required of the system, the system users, its environment and associated constraints. This process involves the capture, analysis and resolution of many ideas, perspectives and relationships at varying levels of detail. Requirements methods based on global reasoning appear to lack the expressive framework to adequately articulate this distributed requirements knowledge structure. The paper describes the problems in trying to establish an adequate and stable set of requirements and proposes a viewpoint-oriented requirements definition (VORD) method as a means of tackling some of these problems. This method structures the requirements engineering process using viewpoints associated with sources of requirements. The paper describes VORD in the light of current viewpoint-oriented requirements approaches and shows how it improves on them. A simple example of a bank auto-teller system is used to demonstrate the method.
Structured analysis for requirements definition The next article, by Ross and Schoman, is one of three papers chosen for inclusion in this book that deal with the subject of structured analysis. With its companion papers --- by Teichroew and Hershey [Paper 23] and by DeMarco [Paper 24] --- the paper gives a good idea of the direction that the software field probably will be following for the next several years. The paper addresses the problems of traditional systems analysis, and anybody who has spent any time as a systems analyst in a large EDP organization immediately will understand the problems and weaknesses of "requirements definition" that Ross and Schoman relate --- clearly not the sort of problems upon which academicians like Dijkstra, Wirth, Knuth, and most other authors in this book have focused! To stress the importance of proper requirements definition, Ross and Schoman state that "even the best structured programming code will not help if the programmer has been told to solve the wrong problem, or, worse yet, has been given a correct description, but has not understood it." In their paper, the authors summarize the problems associated with conventional systems analysis, and describe the steps that a "good" analysis approach should include. They advise that the analyst separate his logical, or functional description of the system from the physical form that it eventually will take; this is difficult for many analysts to do, since they assume, a priori, that the physical implementation of the system will consist of a computer. Ross and Schoman also emphasize the need to achieve a consensus among typically disparate parties: the user liaison personnel who interface with the developers, the "professional" systems analyst, and management. Since all of these people have different interests and different viewpoints, it becomes all the more important that they have a common frame of reference --- a common way of modeling the system-to-be. For this need, Ross and Schoman propose their solution: a proprietary package, known as SADT, that was developed by the consulting firm of SofTech for which the authors work. The SADT approach utilizes a top-down, partitioned, graphic model of a system. The model is presented in a logical, or abstract, fashion that allows for eventual implementation as a manual system, a computer system, or a mixture of both. This emphasis on graphic models of a system is distinctly different from that of the Teichroew and Hershey paper. It is distinctly similar to the approach suggested by DeMarco in "Structured Analysis and System Specification," the final paper in this collection. The primary difference between DeMarco and Ross/Schoman is that DeMarco and his colleagues at YOURDON inc. prefer circles, or "bubbles," whereas the SofTech group prefers rectangles. Ross and Schoman point out that their graphic modeling approach can be tied in with an "automated documentation" approach of the sort described by Teichroew and Hershey. Indeed, this approach gradually is beginning to be adopted by large EDP organizations; but for installations that can't afford the overhead of a computerized, automated systems analysis package, Ross and Schoman neglect one important aspect of systems modeling.
That is the "data dictionary," in which all of the data elements pertinent to the new system are defined in the same logical top-down fashion as the rest of the model. There also is a need to formalize mini-specifications, or "mini-specs" as DeMarco calls them; that is, the "business policy" associated with each bottom-level functional process of the system must be described in a manner far more rigorous than currently is being done. A weakness of the Ross/Schoman paper is its lack of detail about problem solutions: More than half the paper is devoted to a description of the problems of conventional analysis, but the SADT package is described in rather sketchy detail. There are additional documents on SADT available from SofTech, but the reader still will be left with the fervent desire that Messrs. Ross and Schoman and their colleagues at SofTech eventually will sit down and put their ideas into a full-scale book.
For large meta information of national integrated statistics Integrated statistics, synthesized from many survey statistics, form an important part of government statistics. A typical example is the System of National Accounts. To develop such a system, it is necessary to make consistent preparation of 1) documents of methods, 2) programs, and 3) a database. However, it is usually not easy because of the large amount of data types connected with the system. In this paper, we formulate a language as a means of supporting the design of statistical data integration. This language is based on the data abstraction model and treats four types of semantic hierarchies: generalization, derivation, association (aggregation) and classification. We demonstrate that this language leads to natural documentation of statistical data integration, and meta information, used in both programs and a database for the integration, can be generated from the documents.
CORE: A Method for Controlled Requirement Expression
A Data Model for Requirements Analysis The use of a proper data model is a way to introduce rigour in requirements analysis, traditionally considered the most informal stage of software development, and responsible for the more costly errors. Several data models have emerged, but their comparative value is unclear. We think that an appraisal is only possible if the nature — and not only the goal — of requirements analysis is clearly perceived. We investigate this point and emphasise that requirements analysis is an activity of acquiring real-world knowledge, thereby forming a theory in which objectives can be stated and a solution specified. A suitable language should thus restrict the freedom of expression as little as possible when describing some part of the world. A number of requirements are derived from this statement, such as the possibility to describe individual objects, as well as groups of objects, to explicitly refer to a global continuous time, to handle undefinedness, to allow simultaneous events, etc. When assessing the various existing data models with respect to these requirements, the entity-relationship model is found to be a suitable basis, but one still lacking essential features. We extend it in a model called ERAE (entity, relationship, attribute, event), which is presented informally and illustrated on examples.
Software engineering in the twenty-first century
Operational Requirements Accommodation in Distributed System Design Operational requirements are qualities which influence a software system's entire development cycle. The investigation reported here concentrated on three of the most important operational requirements: reliability via fault tolerance, growth, and availability. Accommodation of these requirements is based on an approach to functional decomposition involving representation in terms of potentially independent processors, called virtual machines. Functional requirements may be accommodated through hierarchical decomposition of virtual machines, while performance requirements may be associated with individual virtual machines. Virtual machines may then be mapped to a representation of a configuration of physical resources, so that performance requirements may be reconciled with available performance characteristics.
Online and off-line handwriting recognition: a comprehensive survey Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the on-line case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentification, handwriting learning tools are also considered.
Relating Diagrams to Logic Although logic is general enough to describe anything that can be implemented on a digital computer, the unreadability of predicate calculus makes it unpopular as a design language. Instead, many graphic notations have been developed, each for a narrow range of purposes. Conceptual graphs are a graphic system of logic that is as general as predicate calculus, but they are as readable as the special-purpose diagrams. In fact, many popular diagrams can be viewed as special cases of conceptual graphs: type hierarchies, entity-relationship diagrams, parse trees, dataflow diagrams, flow charts, state-transition diagrams, and Petri nets. This paper shows how such diagrams can be translated to conceptual graphs and thence into other systems of logic, such as the Knowledge Interchange Format (KIF).
Behavioral Subtyping, Specification Inheritance, and Modular Reasoning 2006 CR Categories: D. 2.2 [Software Engineering] Design Tools and Techniques, Object-oriented design methods; D. 2.3 [Software Engineering] Coding Tools and Techniques, Object-oriented programming; D. 2.4 [Software Engineering] Software/Program Verification, Class invariants, correctness proofs, formal methods, programming by contract, reliability, tools, Eiffel, JML; D. 2.7 [Software Engineering] Distribution, Maintenance, and Enhancement, Documentation; D. 3.1 [Programming Languages] Formal Definitions and Theory, Semantics; D. 3.2 [Programming Languages] Language Classifications, Object-oriented languages; D. 3.3 [Programming Languages] Language Constructs and Features, classes and objects, inheritance; F. 3.1 [Logics and Meanings of Programs] Specifying and Verifying and Reasoning about Programs, Assertions, invariants, logics of programs, pre-and post-conditions, specification techniques;
Abstractions of non-interference security: probabilistic versus possibilistic. The Shadow Semantics (Morgan, Math Prog Construction, vol 4014, pp 359–378, 2006; Morgan, Sci Comput Program 74(8):629–653, 2009) is a possibilistic (qualitative) model for noninterference security. Subsequent work (McIver et al., Proceedings of the 37th international colloquium conference on Automata, languages and programming: Part II, 2010) presents a similar but more general quantitative model that treats probabilistic information flow. Whilst the latter provides a framework to reason about quantitative security risks, that extra detail entails a significant overhead in the verification effort needed to achieve it. Our first contribution in this paper is to study the relationship between those two models (qualitative and quantitative) in order to understand when qualitative Shadow proofs can be “promoted” to quantitative versions, i.e. in a probabilistic context. In particular we identify a subset of the Shadow’s refinement theorems that, when interpreted in the quantitative model, still remain valid even in a context where a passive adversary may perform probabilistic analysis. To illustrate our technique we show how a semantic analysis together with a syntactic restriction on the protocol description, can be used so that purely qualitative reasoning can nevertheless verify probabilistic refinements for an important class of security protocols. We demonstrate the semantic analysis by implementing the Shadow semantics in Rodin, using its special-purpose refinement provers to generate (and discharge) the required proof obligations (Abrial et al., STTT 12(6):447–466, 2010). We apply the technique to some small examples based on secure multi-party computations.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2496
0.017829
0.014891
0.000957
0.00019
0.000106
0.000039
0.000002
0
0
0
0
0
0
Integrating safety analysis and requirements engineering Some system failures are due to defects in manufacturing and design; however, a significant number of system failures result from errors, omissions and inconsistencies in the system requirements. We thus need methods to support a 'safe' requirements engineering process whose objectives are to specify system requirements such that system states which compromise safety are avoided and to include, along with the requirements, a justification or safety case which explains why the specified system is indeed safe. This paper describes the extension of a viewpoint-based requirements method to incorporate safety analysis.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
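Fetch-and-add, the synchronization primitive highlighted in this abstract, atomically returns a variable's old value while adding to it. The sketch below emulates that semantics with a lock so several threads can claim unique indices; it only illustrates the semantics, not the Ultracomputer's hardware combining network.

```python
# Sketch of fetch-and-add semantics: atomically return the old value and add a
# constant. Emulated here with a lock; on the Ultracomputer it is combined in
# the network hardware. Threads use it to claim unique work indices.
import threading

class FetchAndAddCounter:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old

counter = FetchAndAddCounter()
claimed = []

def worker():
    for _ in range(1000):
        claimed.append(counter.fetch_and_add(1))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(claimed) == list(range(4000))   # every index claimed exactly once
```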
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
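To illustrate the basic tabu-search mechanics mentioned here (flip moves, a tabu tenure, and an aspiration criterion), the sketch below works on a single-constraint 0/1 knapsack instance; the instance, parameters and simple rejection of infeasible solutions are assumptions for the example, and none of the advanced-level strategies or target analysis from the paper are included.

```python
# Sketch: bare-bones tabu search for a 0/1 knapsack instance (illustrative).
# Moves flip one item in or out; recently flipped items are tabu unless the
# move improves on the best solution found so far (aspiration).
values = [10, 13, 7, 11, 9, 8]
weights = [4, 6, 3, 5, 4, 3]
capacity = 12
TABU_TENURE, ITERATIONS = 3, 200

def evaluate(sol):
    w = sum(wi for wi, s in zip(weights, sol) if s)
    v = sum(vi for vi, s in zip(values, sol) if s)
    return v if w <= capacity else -1          # infeasible solutions are rejected

current = [0] * len(values)
best, best_val = current[:], evaluate(current)
tabu_until = [0] * len(values)                 # iteration until which a flip is tabu

for it in range(1, ITERATIONS + 1):
    best_move, best_move_val = None, None
    for i in range(len(values)):
        cand = current[:]
        cand[i] ^= 1
        val = evaluate(cand)
        aspired = val > best_val
        if (tabu_until[i] <= it or aspired) and (best_move_val is None or val > best_move_val):
            best_move, best_move_val = i, val
    if best_move is None:
        break
    current[best_move] ^= 1
    tabu_until[best_move] = it + TABU_TENURE
    if best_move_val > best_val:
        best, best_val = current[:], best_move_val

print(best, best_val)
```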
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Security Enhancement of Cooperative Single Carrier Systems In this paper, the impact of multiple active eavesdroppers on cooperative single carrier systems with multiple relays and multiple destinations is examined. To achieve the secrecy diversity gains in the form of opportunistic selection, a two-stage scheme is proposed for joint relay and destination selection, in which, after the selection of the relay with the minimum effective maximum signal-to-noise ratio (SNR) to a cluster of eavesdroppers, the destination that has the maximum SNR from the chosen relay is selected. To accurately assess the secrecy performance, exact and asymptotic expressions are obtained in closed form for several security metrics, including the secrecy outage probability, probability of nonzero secrecy rate, and ergodic secrecy rate in frequency selective fading. Based on the asymptotic analysis, key design parameters, such as secrecy diversity gain, secrecy array gain, secrecy multiplexing gain, and power cost, are characterized, from which new insights are drawn. In addition, it is concluded that secrecy performance limits occur when the average received power at the eavesdropper is proportional to the counterpart at the destination. In particular, for the secrecy outage probability, it is confirmed that the secrecy diversity gain collapses to zero with outage floor, whereas for the ergodic secrecy rate, it is confirmed that its slope collapses to zero with capacity ceiling.
Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also the destination receives multiple versions of the message from the source, and one or more relays and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented
On the Performance of Cognitive Underlay Multihop Networks with Imperfect Channel State Information. This paper proposes and analyzes cognitive multihop decode-and-forward networks in the presence of interference due to channel estimation errors. To reduce interference on the primary network, a simple yet effective back-off control power method is applied for secondary multihop networks. For a given threshold of interference probability at the primary network, we derive the maximum back-off control power coefficient, which provides the best performance for secondary multihop networks. Moreover, it is shown that the number of hops for the secondary network is upper-bounded under the fixed settings of the primary network. For secondary multihop networks, new exact and asymptotic expressions for outage probability (OP), bit error rate (BER) and ergodic capacity over Rayleigh fading channels are derived. Based on the asymptotic OP and BER, a pivotal conclusion is reached that the secondary multihop network offers the same diversity order as compared with the network without back-off. Finally, we verify the performance analysis through various numerical examples which confirm the correctness of our analysis for many channel and system settings and provide new insight into the design and optimization of cognitive multihop networks.
Robust Secure Beamforming in MISO Full-Duplex Two-Way Secure Communications Considering worst-case channel uncertainties, we investigate the robust secure beamforming design problem in multiple-input-single-output full-duplex two-way secure communications. Our objective is to maximize worst-case sum secrecy rate under weak secrecy conditions and individual transmit power constraints. Since the objective function of the optimization problem includes both convex and concave terms, we propose to transform convex terms into linear terms. We decouple the problem into four optimization problems and employ alternating optimization algorithm to obtain the locally optimal solution. Simulation results demonstrate that our proposed robust secure beamforming scheme outperforms the non-robust one. It is also found that when the regions of channel uncertainties and the individual transmit power constraints are sufficiently large, because of self-interference, the proposed two-way robust secure communication is proactively degraded to one-way communication.
Secure Relaying in Multihop Communication Systems. This letter considers improving end-to-end secrecy capacity of a multihop decode-and-forward relaying system. First, a secrecy rate maximization problem without transmitting artificial noise (AN) is considered, following which the AN-aided secrecy schemes are proposed. Assuming that global channel state information (CSI) is available, an optimal power splitting solution is proposed. Furthermore, an iterative joint optimization of transmit power and power splitting coefficient has also been considered. For the scenario of no eavesdropper's CSI, we provide a suboptimal solution. The simulation results demonstrate that the AN-aided optimal scheme outperforms other schemes.
Performance Analysis of Two-Way Multi-Antenna Multi-Relay Networks With Hardware Impairments. In this paper, a two-way multi-antenna and multi-relay amplify-and-forward (AF) network with hardware impairments is analyzed. The opportunistic relay selection scheme is used in the relay selection. Maximum ratio transmission and maximum ratio combining are used by the multi-antenna relay in the transmit and receive slots, respectively. We consider two AF protocols, one with variable gain and the other with fixed gain. In particular, closed-form expressions for the outage probability and the throughput of the system are derived. The system performance at high signal-to-noise ratio (SNR) is very important in real scenarios. In order to analyze the impact of hardware impairments on the system at high SNRs, an asymptotic analysis is also derived. In order to analyze the power efficiency, the closed-form expression for the energy-efficiency performance is derived, and a brief analysis is given, which provides a useful reference for engineering practice. In addition, simulation results are provided to show the correctness of our analysis. From the results, we know that the system has better performance when the number of relays grows larger and the impairment level grows smaller. Moreover, the results reveal that an outage floor and a throughput bound appear when hardware impairments exist.
A New Look at Dual-Hop Relaying: Performance Limits with Hardware Impairments. Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
Scikit-learn: Machine Learning in Python Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
Language constructs for managing change in process-centered environments Change is pervasive during software development, affecting objects, processes, and environments. In process centered environments, change management can be facilitated by software-process programming, which formalizes the representation of software products and processes using software-process programming languages (SPPLs). To fully realize this goal SPPLs should include constructs that specifically address the problems of change management. These problems include lack of representation of inter-object relationships, weak semantics for inter-object relationships, visibility of implementations, lack of formal representation of software processes, and reliance on programmers to manage change manually.APPL/A is a prototype SPPL that addresses these problems. APPL/A is an extension to Ada.. The principal extensions include abstract, persistent relations with programmable implementations, relation attributes that may be composite and derived, triggers that react to relation operations, optionally-enforceable predicates on relations, and five composite statements with transaction-like capabilities.APPL/A relations and triggers are especially important for the problems raised here. Relations enable inter-object relationships to be represented explicitly and derivation dependencies to be maintained automatically. Relation bodies can be programmed to implement alternative storage and computation strategies without affecting users of relation specifications. Triggers can react to changes in relations, automatically propagating data, invoking tools, and performing other change management tasks. Predicates and the transaction-like statements support change management in the face of evolving standards of consistency. Together, these features mitigate many of the problems that complicate change management in software processes and process-centered environments.
Animation of Object-Z Specifications with a Set-Oriented Prototyping Language
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
1.122
0.12
0.12
0.12
0.12
0.12
0.056
0
0
0
0
0
0
0
On the classification of NP-complete problems in terms of their correlation coefficient Local search and its variants simulated annealing and tabu search are very popular meta-heuristics to approximatively solve NP-hard optimization problems. Several experimental studies in the literature have shown that in practice some problems (e.g. the Traveling Salesman Problem, Quadratic Assignment Problem) behave very well with these heuristics, whereas others do not (e.g. the Low Autocorrelation Binary String Problem). The autocorrelation function, introduced by Weinberger, measures the ruggedness of a landscape which is formed by a cost function and a neighborhood. We use a derived parameter, named the autocorrelation coefficient, as a tool to better understand these phenomena. In this paper we mainly study cost functions including penalty terms. Our results can be viewed as a first attempt to theoretically justify why it is often better in practice to enlarge the solution space and add penalty terms than to work solely on feasible solutions. Moreover, some new results as well as previously known results allow us to obtain a hierarchy of combinatorial optimization problems relatively to their ruggedness. Comparing this classification with experimental results reported in the literature yields a good agreement between ruggedness and difficulty for local search methods. In this way, we are also able to justify theoretically why a neighborhood is better than another for a given problem.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dynamic process modelling using Petri nets with applications to nuclear power plant emergency management
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A looped-functional approach for robust stability analysis of linear impulsive systems A new functional-based approach is developed for the stability analysis of linear impulsive systems. The new method, which introduces looped functionals, considers non-monotonic Lyapunov functions and leads to LMI conditions devoid of exponential terms. This allows one to easily formulate dwell-time results, for both certain and uncertain systems. It is also shown that this approach may be applied to a wider class of impulsive systems than existing methods. Some examples, notably on sampled-data systems, illustrate the efficiency of the approach.
A novel stability analysis of linear systems under asynchronous samplings. This article proposes a novel approach to assess the stability of continuous linear systems with sampled-data inputs. The method, which is based on the discrete-time Lyapunov theorem, provides easily tractable stability conditions for the continuous-time model. Sufficient conditions for asymptotic and exponential stability are provided dealing with synchronous and asynchronous samplings and uncertain systems. An additional stability analysis is provided for the cases of multiple sampling periods and packet losses. Several examples show the efficiency of the method.
Recent developments on the stability of systems with aperiodic sampling: An overview. This article presents basic concepts and recent research directions about the stability of sampled-data systems with aperiodic sampling. We focus mainly on the stability problem for systems with arbitrary time-varying sampling intervals which has been addressed in several areas of research in Control Theory. Systems with aperiodic sampling can be seen as time-delay systems, hybrid systems, Input/Output interconnections, discrete-time systems with time-varying parameters, etc. The goal of the article is to provide a structural overview of the progress made on the stability analysis problem. Without being exhaustive, which would be neither possible nor useful, we try to bring together results from diverse communities and present them in a unified manner. For each of the existing approaches, the basic concepts, fundamental results, converse stability theorems (when available), and relations with the other approaches are discussed in detail. Results concerning extensions of Lyapunov and frequency domain methods for systems with aperiodic sampling are recalled, as they allow to derive constructive stability conditions. Furthermore, numerical criteria are presented while indicating the sources of conservatism, the problems that remain open and the possible directions of improvement. At last, some emerging research directions, such as the design of stabilizing sampling sequences, are briefly discussed.
Stability analysis of systems with aperiodic sample-and-hold devices Motivated by the widespread use of networked and embedded control systems, improved stability conditions are derived for sampled-data feedback control systems with uncertainly time-varying sampling intervals. The results are derived by exploiting the passivity-type property of the operator arising in the input-delay approach to the system in addition to the gain of the operator, and are hence less conservative than existing ones.
Convex Dwell-Time Characterizations for Uncertain Linear Impulsive Systems New sufficient conditions for the characterization of dwell-times for linear impulsive systems are proposed and shown to coincide with continuous decrease conditions of a certain class of looped-functionals, a recently introduced type of functionals suitable for the analysis of hybrid systems. This approach allows one to consider Lyapunov functions that evolve nonmonotonically along the flow of the system in a new way, thereby broadening the admissible class of systems which may be analyzed. As a byproduct, the particular structure of the obtained conditions makes the method easily extendable to uncertain systems by exploiting some convexity properties. Several examples illustrate the approach.
Robust sampled-data stabilization of linear systems: an input delay approach A new approach to robust sampled-data control is introduced. The system is modelled as a continuous-time one, where the control input has a piecewise-continuous delay. Sufficient linear matrix inequality (LMI) conditions for sampled-data state-feedback stabilization of such systems are derived via a descriptor approach to time-delay systems. The only restriction on the sampling is that the distance between successive sampling times is not greater than some prechosen h>0 for which the LMIs are feasible. For h→0 the conditions coincide with the necessary and sufficient conditions for continuous-time state-feedback stabilization. Our approach is applied to two problems: to sampled-data stabilization of systems with polytopic type uncertainties and to regional stabilization by sampled-data saturated state-feedback.
Stability and Stabilization of Takagi-Sugeno Fuzzy Systems via Sampled-Data and State Quantized Controller. In this paper, we investigate the problem of stability and stabilization for sampled-data fuzzy systems with state quantization. By using an input delay approach, the sampled-data fuzzy systems with state quantization are transformed into a continuous-time system with a delay in the state. The transformed system contains nondifferentiable time-varying state delay. Based on some integral techniques...
Stability Analysis for Delayed Neural Networks Considering Both Conservativeness and Complexity. This paper investigates delay-dependent stability for continuous neural networks with a time-varying delay. This paper aims at deriving a new stability criterion, considering tradeoff between conservativeness and calculation complexity. A new Lyapunov-Krasovskii functional with simple augmented terms and delay-dependent terms is constructed, and its derivative is estimated by several techniques, i...
A survey of linear matrix inequality techniques in stability analysis of delay systems Recent years have witnessed a resurgence of research interests in analysing the stability of time-delay systems. Many results have been reported using a variety of approaches and techniques. However, much of the focus has been laid on the use of the Lyapunov-Krasovskii theory to derive sufficient stability conditions in the form of linear matrix inequalities. The purpose of this article is to survey the recent results developed to analyse the asymptotic stability of time-delay systems. Both delay-independent and delay-dependent results are reported in the article. Special emphases are given to the issues of conservatism of the results and computational complexity. Connections of certain delay-dependent stability results are also discussed.
A logarithmic quantization index modulation for perceptually better data hiding In this paper, a novel arrangement for quantizer levels in the Quantization Index Modulation (QIM) method is proposed. Due to perceptual advantages of logarithmic quantization, and in order to solve the problems of a previous logarithmic quantization-based method, we used the compression function of the µ-Law standard for quantization. In this regard, the host signal is first transformed into the logarithmic domain using the µ-Law compression function. Then, the transformed data is quantized uniformly and the result is transformed back to the original domain using the inverse function. The scalar method is then extended to vector quantization. For this, the magnitude of each host vector is quantized on the surface of hyperspheres which follow logarithmic radii. The optimum parameter µ for both scalar and vector cases is calculated according to the host signal distribution. Moreover, inclusion of a secret key in the proposed method, similar to the dither modulation in QIM, is introduced. Performance of the proposed method in both cases is analyzed and the analytical derivations are verified through extensive simulations on artificial signals. The method is also simulated on real images and its performance is compared with previous scalar and vector quantization-based methods. Results show that this method features a stronger watermark in comparison with conventional QIM and, as a result, has better performance while it does not suffer from the drawbacks of a previously proposed logarithmic quantization algorithm.
Incorporating usability into requirements engineering tools The development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. This definition process requires a significant amount of communication between the requestor and the developer of the system. In recent years, several methodologies and tools have been proposed to improve this communication process. This paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool.
Adapting function point analysis to Jackson system development Overviews of the estimation model function point analysis (FPA) and the operational software development method Jackson system development (JSD) are given. The adaptation to JSD projects of two main versions of the FPA method is described. A number of issues are raised concerning both the applicability of FPA-based techniques to JSD projects and general ways in which FPA estimation might be improved. A summary is presented of the results obtained by applying the two adaptations to an actual commercial JSD project, and various objectives are highlighted for future research
SPMD execution of programs with dynamic data structures on distributed memory machines A combination of language features and compilation techniques that permits SPMD (single-program multiple-data) execution of programs with pointer-based dynamic data structures is presented. The Distributed Dynamic Pascal (DDP) language, which supports the construction and manipulation of local as well as distributed data structures, is described. The compiler techniques developed translate a sequential DDP program for SPMD execution in which all processors are provided with the same program but each processor executes only that part of the program which operates on the elements of the distributed data structures local to the processor. Therefore, the parallelism implicit in a sequential program is exploited. An approach for implementing pointers that is based on the generation of names for the nodes in a dynamic data structure is presented. The name-based strategy makes possible the dynamic distribution of data structures among the processors as well as the traversal of distributed data structures without interprocessor communication
More than requirements: Applying requirements engineering techniques to the challenge of setting corporate intellectual policy, an experience report Creation and adoption of corporate policies requires significant commitment of scarce senior management resources. In the absence of processes and tools, convergence upon a final policy may not be achieved in a timely manner. Significant similarities between policy and requirements documents suggest that requirements engineering techniques could be used to generate policy. However, neither evidence of feasibility of this approach nor theoretical investigation is present in the research literature. This paper reports upon our experience from an exploratory study where well-established requirements engineering methodologies were applied to generate corporate intellectual property policy. Interview, brainstorming and survey techniques were used to successfully apply structure and process to the task, generating a new corporate intellectual property policy that met or exceeded all stakeholder goals. The materials gathered during stakeholder interactions and analysis not only provided functional guidance for the policy itself, but also non-functional guidance with respect to the diversity of stakeholder opinions and the strength with which opinions were held. This insider knowledge greatly facilitated the creation of the draft policy: it increased our expectation of stakeholder acceptance and also facilitated subsequent negotiation efforts. The feasibility of applying RE techniques to crafting corporate policy has been demonstrated and the results show sufficient promise that further investigation is warranted.
1.040752
0.014588
0.013374
0.010237
0.007028
0.003148
0.000154
0.00003
0.000017
0.000001
0
0
0
0
Static Homogeneous Multiprocessor Task Graph Scheduling Using Ant Colony Optimization. Nowadays, the utilization of multiprocessor environments has been increased due to the increase in time complexity of application programs and decrease in hardware costs. In such architectures during the compilation step, each program is decomposed into the smaller and maybe dependent segments so-called tasks. Precedence constraints, required execution times of the tasks, and communication costs among them are modeled using a directed acyclic graph (DAG) named task-graph. All the tasks in the task-graph must be assigned to a predefined number of processors in such a way that the precedence constraints are preserved, and the program's completion time is minimized, and this is an NP-hard problem from the time-complexity point of view. The results obtained by different approaches are dominated by two major factors; first, which order of tasks should be selected (sequence subproblem), and second, how the selected sequence should be assigned to the processors (assigning subproblem). In this paper, a hybrid proposed approach has been presented, in which two different artificial ant colonies cooperate to solve the multiprocessor task-scheduling problem; one colony to tackle the sequence subproblem, and another to cope with assigning subproblem. The utilization of background knowledge about the problem (different priority measurements of the tasks) has made the proposed approach very robust and efficient. 125 different task-graphs with various shape parameters such as size, communication-to-computation ratio and parallelism have been utilized for a comprehensive evaluation of the proposed approach, and the results show its superiority versus the other conventional methods from the performance point of view.
Improving the performance of Apache Hadoop on pervasive environments through context-aware scheduling. This article proposes to improve Apache Hadoop scheduling through a context-aware approach. Apache Hadoop is the most popular implementation of the MapReduce paradigm for distributed computing, but its design does not adapt automatically to computing nodes’ context and capabilities. By introducing context-awareness into Hadoop, we intend to dynamically adapt its scheduling to the execution environment. This is a necessary feature in the context of pervasive grids, which are heterogeneous, dynamic and shared environments. The solution has been incorporated into Hadoop and assessed through controlled experiments. The experiments demonstrate that context-awareness provides comparative performance gains, especially when some of the resources disappear during execution.
A lightweight decentralized service placement policy for performance optimization in fog computing A decentralized optimization policy for service placement in fog computing is presented. The optimization is addressed to place most popular services as closer to the users as possible. The experimental validation is done in the iFogSim simulator and by comparing our algorithm with the simulator’s built-in policy. The simulation is characterized by modeling a microservice-based application for different experiment sizes. Results showed that our decentralized algorithm places most popular services closer to users, improving network usage and service latency of the most requested applications, at the expense of a latency increment for the less requested services and a greater number of service migrations.
An incremental ant colony optimization based approach to task assignment to processors for multiprocessor scheduling. Optimized task scheduling is one of the most important challenges to achieve high performance in multiprocessor environments such as parallel and distributed systems. Most introduced task-scheduling algorithms are based on the so-called list scheduling technique. The basic idea behind list scheduling is to prepare a sequence of nodes in the form of a list for scheduling by assigning them some priority measurements, and then repeatedly removing the node with the highest priority from the list and allocating it to the processor providing the earliest start time (EST). Therefore, it can be inferred that the makespans obtained are dominated by two major factors: (1) which order of tasks should be selected (sequence subproblem); (2) how the selected order should be assigned to the processors (assignment subproblem). A number of good approaches for overcoming the task sequence dilemma have been proposed in the literature, while the task assignment problem has not been studied much. The results of this study prove that assigning tasks to the processors using the traditional EST method is not optimum; in addition, a novel approach based on the ant colony optimization algorithm is introduced, which can find far better solutions.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
Automatic speech recognition- an approach for designing inclusive games Computer games are now a part of our modern culture. However, certain categories of people are excluded from this form of entertainment and social interaction because they are unable to use the interface of the games. The reason for this can be deficits in motor control, vision or hearing. By using automatic speech recognition systems (ASR), voice driven commands can be used to control the game, which can thus open up the possibility for people with motor system difficulty to be included in game communities. This paper aims at finding a standard way of using voice commands in games, one that uses a speech recognition system in the backend and can be universally applied for designing inclusive games. Present speech recognition systems, however, do not support emotions, attitudes, tones etc. This is a drawback because such expressions can be vital for gaming. Taking multiple types of existing genres of games into account and analyzing their voice command requirements, a general ASRS module is proposed which can work as a common platform for designing inclusive games. A fuzzy logic controller is then proposed to enhance the system. The standard voice driven module can be based on the algorithm or the fuzzy controller, which can be used to design software plug-ins or can be included in a microchip. It can then be integrated with the game engines, creating the possibility of voice driven universal access for controlling games.
A novel method for solving the fully neutrosophic linear programming problems The most widely used technique for solving and optimizing a real-life problem is linear programming (LP), due to its simplicity and efficiency. However, in order to handle the impreciseness in the data, the neutrosophic set theory plays a vital role, as it simulates the decision-making process of humans by considering all aspects of a decision (i.e., agree, not sure and disagree). Keeping these advantages, in the present work, we introduce neutrosophic LP models whose parameters are represented with trapezoidal neutrosophic numbers and present a technique for solving them. The presented approach has been illustrated with some numerical examples and shows its superiority over the state of the art by comparison. Finally, we conclude that the proposed approach is simpler, more efficient and capable of solving the LP models as compared to other methods.
Secure Medical Data Transmission Model for IoT-Based Healthcare Systems. Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security, and the integrity of the medical data became big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed through integrating either 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption schema is built using a combination of Advanced Encryption Standard, and Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters; the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values were relatively varied from 50.59 to 57.44 in case of color images and from 50.52 to 56.09 with the gray scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were ones for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the confidential patient's data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image.
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
Strategies for information requirements determination Correct and complete information requirements are key ingredients in planning organizational information systems and in implementing information systems applications. Yet, there has been relatively little research on information requirements determination, and there are relatively few practical, well-formulated procedures for obtaining complete, correct information requirements. Methods for obtaining and documenting information requirements are proposed, but they tend to be presented as general solutions rather than alternative methods for implementing a chosen strategy of requirements determination. This paper identifies two major levels of requirements: the organizational information requirements reflected in a planned portfolio of applications and the detailed information requirements to be implemented in a specific application. The constraints on humans as information processors are described in order to explain why "asking" users for information requirements may not yield a complete, correct set. Various strategies for obtaining information requirements are explained. Examples are given of methods that fit each strategy. A contingency approach is then presented for selecting an information requirements determination strategy. The contingency approach is explained both for defining organizational information requirements and for defining specific, detailed requirements in the development of an application.
A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented.
Behavioral Subtyping, Specification Inheritance, and Modular Reasoning 2006 CR Categories: D. 2.2 [Software Engineering] Design Tools and Techniques, Object-oriented design methods; D. 2.3 [Software Engineering] Coding Tools and Techniques, Object-oriented programming; D. 2.4 [Software Engineering] Software/Program Verification, Class invariants, correctness proofs, formal methods, programming by contract, reliability, tools, Eiffel, JML; D. 2.7 [Software Engineering] Distribution, Maintenance, and Enhancement, Documentation; D. 3.1 [Programming Languages] Formal Definitions and Theory, Semantics; D. 3.2 [Programming Languages] Language Classifications, Object-oriented languages; D. 3.3 [Programming Languages] Language Constructs and Features, classes and objects, inheritance; F. 3.1 [Logics and Meanings of Programs] Specifying and Verifying and Reasoning about Programs, Assertions, invariants, logics of programs, pre-and post-conditions, specification techniques;
Reflection in direct style A reflective language enables us to access, inspect, and/or modify the language semantics from within the same language framework. Although the degree of semantics exposure differs from one language to another, the most powerful approach, referred to as the behavioral reflection, exposes the entire language semantics (or the language interpreter) that defines behavior of user programs for user inspection/modification. In this paper, we deal with the behavioral reflection in the context of a functional language Scheme. In particular, we show how to construct a reflective interpreter where user programs are interpreted by the tower of metacircular interpreters and have the ability to change any parts of the interpreters during execution. Its distinctive feature compared to the previous work is that the metalevel interpreters observed by users are written in direct style. Based on the past attempt of the present author, the current work solves the level-shifting anomaly by defunctionalizing and inspecting the top of the continuation frames. The resulting system enables us to freely go up and down the levels and access/modify the direct-style metalevel interpreter. This is in contrast to the previous system where metalevel interpreters were written in continuation-passing style (CPS) and only CPS functions could be exposed to users for modification.
Hyperspectral image compression based on lapped transform and Tucker decomposition In this paper, we present a hyperspectral image compression system based on the lapped transform and Tucker decomposition (LT-TD). In the proposed method, each band of a hyperspectral image is first decorrelated by a lapped transform. The transformed coefficients of different frequencies are rearranged into three-dimensional (3D) wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors. Then they are decomposed by Tucker decomposition into a core tensor and three factor matrices. The core tensor preserves most of the energy of the original tensor, and it is encoded using a bit-plane coding algorithm into bit-streams. Comparison experiments have been performed and provided, as well as an analysis regarding the contributing factors for the compression performance, such as the rank of the core tensor and quantization of the factor matrices. Highlights: We design a hyperspectral image compression using lapped transform and Tucker decomposition. Each band of a hyperspectral image is decorrelated by a lapped transform. Transformed coefficients of various frequencies are rearranged in 3D wavelet sub-band structures. 3D sub-bands are viewed as third-order tensors, decomposed by Tucker decomposition. The core tensor is encoded using a bit-plane coding algorithm into bit-streams.
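To make the Tucker step concrete, the following is a minimal HOSVD-style Tucker decomposition in plain numpy, applied to a random third-order tensor standing in for a 3D wavelet sub-band; it is a sketch of the general decomposition, not the paper's coder, and the chosen ranks are arbitrary.

```python
# HOSVD-style Tucker sketch in plain numpy: factor matrices come from the
# leading left singular vectors of each mode unfolding, the core from mode
# products with the transposed factors.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    out = np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def tucker_hosvd(T, ranks):
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    core = T
    for n, Un in enumerate(U):
        core = mode_product(core, Un.T, n)
    return core, U

T = np.random.rand(16, 16, 8)          # stand-in for a 3D wavelet sub-band
core, U = tucker_hosvd(T, (4, 4, 2))
approx = core
for n, Un in enumerate(U):
    approx = mode_product(approx, Un, n)
print(core.shape, np.linalg.norm(T - approx) / np.linalg.norm(T))
```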
1.101667
0.103333
0.103333
0.101667
0.051667
0.001667
0.000667
0.000056
0
0
0
0
0
0
Lean Cuisine+: an executable graphical notation for describing direct manipulation interfaces The paper describes an executable semi-formal graphical notation, Lean Cuisine+, for describing the underlying behaviour of event-based direct manipulation interfaces, and outlines a methodology for constructing Lean Cuisine+ specifications. Lean Cuisine+ is a multilayered notation, and is a development of the meneme model of Lean Cuisine. A motivation of the research stems from the need for tools and techniques to facilitate high-level interface design. The research supports and brings together a number of views concerning the requirements of notations at this level. These are that a notation should be semi-formal, graphical, executable, and object-based, and that to be most effective it should be targeted at a specific category of interaction. The Lean Cuisine+ notation meets all these criteria, the underlying meneme model matching closely with the selection-based nature of direct manipulation interfaces.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
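A bare-bones tabu-search loop for a 0/1 multiconstraint knapsack is sketched below; it uses bit-flip moves, a fixed tabu tenure, a simple aspiration rule and a penalty for infeasibility, whereas the paper's choice rules, aspiration criteria and extreme-point exploitation are considerably more elaborate. All parameters and the small instance are invented for the demo.

```python
# Tabu search sketch for a 0/1 multiconstraint knapsack (illustrative only).
import random

def tabu_knapsack(values, weights, capacities, iters=2000, tenure=3, seed=0):
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n
    best, best_val = x[:], 0
    tabu = {}                                   # item -> iteration until which it is tabu

    def evaluate(sol):
        over = sum(max(0, sum(w[i] * sol[i] for i in range(n)) - c)
                   for w, c in zip(weights, capacities))
        return sum(values[i] * sol[i] for i in range(n)) - 1000 * over

    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            val = evaluate(y)
            # aspiration: accept a tabu move if it beats the best value so far
            if tabu.get(i, -1) < it or val > best_val:
                candidates.append((val, i, y))
        if not candidates:                      # everything tabu: random flip
            i = rng.randrange(n)
            y = x[:]
            y[i] ^= 1
            candidates.append((evaluate(y), i, y))
        val, i, x = max(candidates)
        tabu[i] = it + tenure
        feasible = all(sum(w[j] * x[j] for j in range(n)) <= c
                       for w, c in zip(weights, capacities))
        if feasible and val > best_val:
            best, best_val = x[:], val
    return best, best_val

values = [10, 13, 7, 8, 9, 4]
weights = [[2, 3, 1, 4, 3, 2], [4, 1, 3, 2, 2, 1]]
capacities = [8, 7]
print(tabu_knapsack(values, weights, capacities))
```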
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Bounded Verification of Voting Software We present a case-study in which vote-tallying software is analyzed using a bounded verification technique, whereby all executions of a procedure are exhaustively examined within a finite space given by a bound on the size of the heap and the number of loop unrollings. The technique involves an encoding of the procedure in an intermediate relational programming language, a translation of that language to relational logic, and an analysis of the logic that exploits recent advances in finite model-finding. Our technique yields concrete counterexamples --- traces of the procedure that violate the specification. The vote-tallying software, used for public elections in the Netherlands, had previously been annotated with specifications in the Java Modeling Language and analyzed with ESC/Java2. Our analysis found counterexamples to the JML contracts, indicating bugs in the code and errors in the specifications that evaded prior analysis.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Preliminary design of JML: a behavioral interface specification language for java JML is a behavioral interface specification language tailored to Java(TM). Besides pre- and postconditions, it also allows assertions to be intermixed with Java code; these aid verification and debugging. JML is designed to be used by working software engineers; to do this it follows Eiffel in using Java expressions in assertions. JML combines this idea from Eiffel with the model-based approach to specifications, typified by VDM and Larch, which results in greater expressiveness. Other expressiveness advantages over Eiffel include quantifiers, specification-only variables, and frame conditions.This paper discusses the goals of JML, the overall approach, and describes the basic features of the language through examples. It is intended for readers who have some familiarity with both Java and behavioral specification using pre- and postconditions.
Specifications are (preferably) executable The validation of software specifications with respect to explicit and implicit user requirements is extremely difficult. To ease the validation task and to give users immediate feedback of the behavior of the future software it was suggested to make specifications executable. However, Hayes and Jones (Hayes, Jones 89) argue that executable specifications should be avoided because executability can restrict the expressiveness of specification languages, and can adversely affect implementations. In this paper I will argue for executable specifications by showing that non-executable formal specifications can be made executable on almost the same level of abstraction and without essentially changing their structure. No new algorithms have to be introduced to get executability. In many cases the combination of property-orientation and search results in specifications based on the generate-and-test approach. Furthermore, I will demonstrate that declarative specification languages allow to combine high expressiveness and executability.
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
A new notion of encapsulation Generally speaking, a “module” is used as an “encapsulation mechanism” to tie together a set of declarations of variables and operations upon them. Although there is no standard way to instantiate or use a module, the general idea is that a module describes the implementation of all the values of a given type. We believe that this is too inflexible to provide enough control: one should be able to use different implementations (given by different modules) for variables (and values) of the same type. When incorporated properly into the notation, this finer grain of control allows one to program at a high level of abstraction and then to indicate how various pieces of the program should be implemented. It provides simple, effective access to earlier-written modules, so that they are useable in a more flexible manner than is possible in current notations. It generalizes to provide the ability to indicate structural transformations, in a disciplined fashion, in order to achieve efficiency with respect to time or space. However, the program will still be understood at the abstract level and the transformations or implementations will be looked at only to deal with efficiency concerns. Finally, some so-called “data types” (e.g. stack and subranges of the integers) can more properly be looked upon simply as restricted implementations of more general types (e.g. sequence and integer). Thus, the notion of subtype becomes less important.
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Specification matching of software components Specification matching is a way to compare two software components, based on descriptions of the component's behaviors. In the context of software reuse and library retrieval, it can help determine whether one component can be substituted for another or how one can be modified to fit the requirements of the other. In the context of object-oriented programming, it can help determine when one type is a behavioral subtype of another. We use formal specifications to describe the behavior of software components and, hence, to determine whether two components match. We give precise definitions of not just exact match, but, more relevantly, various flavors of relaxed match. These definitions capture the notions of generalization, specialization, and substitutability of software components. Since our formal specifications are pre- and postconditions written as predicates in first-order logic, we rely on theorem proving to determine match and mismatch. We give examples from our implementation of specification matching using the Larch Prover.
Distributed Representations of Words and Phrases and their Compositionality. The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
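The phrase-finding idea mentioned in the abstract above can be sketched as a bigram score that compares co-occurrence counts against unigram counts with a discounting constant; the threshold, discount and toy corpus below are illustrative assumptions, not values from the paper.

```python
# Sketch of a bigram phrase score: pairs whose co-occurrence count is high
# relative to the unigram counts (minus a discount) are merged into phrases.
from collections import Counter

def find_phrases(tokens, delta=1.0, threshold=0.1):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    phrases = {}
    for (a, b), n_ab in bigrams.items():
        score = (n_ab - delta) / (unigrams[a] * unigrams[b])
        if score > threshold:
            phrases[(a, b)] = score
    return phrases

text = "new york is larger than new jersey but new york is colder".split()
print(find_phrases(text))
```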
Metaphors and models: conceptual foundations of representations in interactive systems development When system developers design a computer system (or other information artifact), they must inevitably make judgements as to how to abstract the worksystem and how to represent this abstraction in their designs. In the past, such abstractions have been based either on a traditional philosophy of cognition of cognitive psychology or on intuitive, spontaneous philosophies. A number of recent developments in distributed cognition (Hutchins, 1995), activity theory (Nardi, 1996), and experientialism (Lakoff, 1987) have raised questions about the legitimacy of such philosophies. In this article, we discuss from where the abstractions come that designers employ and how such abstractions are related to the concepts that the users of these systems have. In particular, we use the theory of experientialism or experiential cognition as the foundation for our analysis. Experientialism (Lakoff, 1987) has previously only been applied to human-computer interaction (HCI) design in a quite limited way, yet it deals specifically with issues concerned with categorization and concept formation. We show how the concept of metaphor, derived from experientialism, can be used to understand the strengths and weaknesses of alternative representations in HCI design, how it can highlight changes in the paradigm underlying representations, and how it can be used to consider new approaches to HCI design. We also discuss the role that "mental spaces" have in forming new concepts and designs.
Generating, integrating, and activating thesauri for concept-based document retrieval A blackboard-based document management system that uses a neural network spreading-activation algorithm which lets users traverse multiple thesauri is discussed. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts. The system provides two control modes: a browsing module and an activation module that determine the sequence of operations. With the browsing module, users have full control over which knowledge sources to browse and what terms to select. The system's query formation; the retrieving, ranking and selection of documents; and thesaurus activation are described.
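A toy version of spreading activation over a term graph (standing in for the multi-thesaurus traversal described above) might look like the following; the decay factor, hop limit and the tiny thesaurus are made-up illustrations.

```python
# Spreading-activation sketch: activation starts at the query terms, decays
# along graph edges, and the highest-activated terms are returned first.
def spread_activation(graph, seeds, decay=0.5, hops=2):
    activation = {t: 1.0 for t in seeds}
    frontier = dict(activation)
    for _ in range(hops):
        nxt = {}
        for term, act in frontier.items():
            for nb in graph.get(term, ()):
                nxt[nb] = max(nxt.get(nb, 0.0), act * decay)
        for term, act in nxt.items():
            activation[term] = max(activation.get(term, 0.0), act)
        frontier = nxt
    return sorted(activation.items(), key=lambda kv: -kv[1])

thesaurus = {
    "neural network": ["machine learning", "connectionism"],
    "machine learning": ["artificial intelligence", "data mining"],
    "connectionism": ["cognitive science"],
}
print(spread_activation(thesaurus, ["neural network"]))
```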
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.21728
0.016714
0.016714
0.002462
0.000165
0.000031
0
0
0
0
0
0
0
0
A scheme for robust distributed sensor fusion based on average consensus We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
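The scheme described above can be illustrated by letting each node average its local sufficient statistics with Metropolis weights on a fixed ring graph, after which a local least-squares solve approximates the centralized maximum-likelihood estimate. This is a simplified sketch with an invented static topology, not the paper's analysis of dynamically changing graphs.

```python
# Consensus-based fusion sketch: nodes average their local statistics
# (A_i = a_i a_i^T, b_i = a_i y_i) with Metropolis weights on a ring graph;
# each node's local least-squares estimate then approaches the centralized
# maximum-likelihood solution.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 10, 2
x_true = np.array([1.0, -2.0])
a = rng.normal(size=(n_nodes, dim))
y = a @ x_true + 0.1 * rng.normal(size=n_nodes)

A = np.einsum('ni,nj->nij', a, a)           # per-node a_i a_i^T
b = a * y[:, None]                          # per-node a_i y_i

neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
for _ in range(200):                        # consensus iterations
    A_new, b_new = A.copy(), b.copy()
    for i in range(n_nodes):
        for j in neighbors[i]:
            w = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
            A_new[i] += w * (A[j] - A[i])
            b_new[i] += w * (b[j] - b[i])
    A, b = A_new, b_new

x_hat_node0 = np.linalg.solve(A[0], b[0])   # local estimate at node 0
print(x_hat_node0, x_true)
```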
Consensus-based algorithms for distributed filtering The paper addresses Distributed State Estimation (DSE) over sensor networks. Two existing consensus approaches for DSE of linear systems, named consensus on information (CI) and consensus on measurements (CM), are extended to nonlinear systems. Further, a novel hybrid consensus approach exploiting both CM and CI (named HCMCI=Hybrid CM + CI) is introduced in order to combine their complementary benefits. Novel theoretical results, limitedly to linear systems, on the guaranteed stability of the HCMCI filter under minimal requirements (i.e. collective observability and network connectivity) are proved. Finally, a simulation case-study is presented in order to comparatively show the effectiveness of the proposed consensus-based state estimators.
The extended Kalman filter as an exponential observer for nonlinear systems In this correspondence, we analyze the behavior of the extended Kalman filter as a state estimator for nonlinear deterministic systems. Using the direct method of Lyapunov, we prove that under certain conditions, the extended Kalman filter is an exponential observer, i.e., the dynamics of the estimation error is exponentially stable. Furthermore, we discuss a generalization of the Kalman filter with exponential data weighting to nonlinear systems.
Distributed Particle Filter Implementation With Intermittent/Irregular Consensus Convergence Motivated by non-linear, non-Gaussian, distributed multi-sensor/agent navigation and tracking applications, we propose a multi-rate consensus/fusion based framework for distributed implementation of the particle filter (CF/DPF). The CF/DPF framework is based on running localized particle filters to estimate the overall state vector at each observation node. Separate fusion filters are designed to consistently assimilate the local filtering distributions into the global posterior by compensating for the common past information between neighboring nodes. The CF/DPF offers two distinct advantages over its counterparts. First, the CF/DPF framework is suitable for scenarios where network connectivity is intermittent and consensus can not be reached between two consecutive observations. Second, the CF/DPF is not limited to the Gaussian approximation for the global posterior density. A third contribution of the paper is the derivation of the exact expression for computing the posterior Cramér–Rao lower bound (PCRLB) for the distributed architecture based on a recursive procedure involving the local Fisher information matrices (FIMs) of the distributed estimators. The performance of the CF/DPF algorithm closely follows the centralized particle filter approaching the PCRLB at the signal to noise ratios that we tested.
Distributed Linear Estimation Over Sensor Networks We consider a network of sensors in which each node may collect noisy linear measurements of some unknown parameter. In this context, we study a distributed consensus diffusion scheme that relies only on bidirectional communication among neighbour nodes (nodes that can communicate and exchange data), and allows every node to compute an estimate of the unknown parameter that asymptotically converges to the true parameter. At each time iteration, a measurement update and a spatial diffusion phase are performed across the network, and a local least-squares estimate is computed at each node. The proposed scheme allows one to consider networks with dynamically changing communication topology, and it is robust to unreliable communication links and failures in measuring nodes. We show that under suitable hypotheses all the local estimates converge to the true parameter value.
Diffusion Strategies for Distributed Kalman Filtering and Smoothing We study the problem of distributed Kalman filtering and smoothing, where a set of nodes is required to estimate the state of a linear dynamic system from in a collaborative manner. Our focus is on diffusion strategies, where nodes communicate with their direct neighbors only, and the information is diffused across the network through a sequence of Kalman iterations and data-aggregation. We study the problems of Kalman filtering, fixed-lag smoothing and fixed-point smoothing, and propose diffusion algorithms to solve each one of these problems. We analyze the mean and mean-square performance of the proposed algorithms, provide expressions for their steady-state mean-square performance, and analyze the convergence of the diffusion Kalman filter recursions. Finally, we apply the proposed algorithms to the problem of estimating and tracking the position of a projectile. We compare our simulation results with the theoretical expressions, and note that the proposed approach outperforms existing techniques.
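A schematic diffusion Kalman filter, simplified from the strategies described above, runs a local predict/update at every node and then combines neighboring estimates with uniform weights; the constant-velocity model, noise levels and ring topology below are assumptions made for the demo.

```python
# Simplified diffusion Kalman sketch: local time/measurement update per node,
# followed by a diffusion step that averages estimates over each neighborhood.
import numpy as np

rng = np.random.default_rng(1)
F = np.array([[1.0, 1.0], [0.0, 1.0]])      # constant-velocity model
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.5]])

n_nodes = 4
neighbors = {i: [(i - 1) % n_nodes, i, (i + 1) % n_nodes] for i in range(n_nodes)}
x_true = np.array([0.0, 1.0])
x_hat = [np.zeros(2) for _ in range(n_nodes)]
P = [np.eye(2) for _ in range(n_nodes)]

for _ in range(50):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    # local prediction + measurement update at every node
    for i in range(n_nodes):
        z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
        xp, Pp = F @ x_hat[i], F @ P[i] @ F.T + Q
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x_hat[i] = xp + K @ (z - H @ xp)
        P[i] = (np.eye(2) - K @ H) @ Pp
    # diffusion step: uniform combination over the neighborhood
    x_hat = [np.mean([x_hat[j] for j in neighbors[i]], axis=0)
             for i in range(n_nodes)]

print(x_hat[0], x_true)
```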
Extended Kalman Filter Based Learning Algorithm for Type-2 Fuzzy Logic Systems and Its Experimental Evaluation. In this paper, the use of extended Kalman filter for the optimization of the parameters of type-2 fuzzy logic systems is proposed. The type-2 fuzzy logic system considered in this study benefits from a novel type-2 fuzzy membership function which has certain values on both ends of the support and the kernel, and uncertain values on other parts of the support. To have a comparison of the extended K...
Robust H∞ control for linear discrete-time systems with norm-bounded nonlinear uncertainties. This paper studies the problem of robust control of a class of uncertain discrete-time systems. The class of uncertain systems is described by a state-space model with linear nominal parts and norm-bounded nonlinear uncertainties in the state and output equations. The authors address the problem of robust H∞ control in which both robust stability and a prescribed H∞ performance are required to be achieved, irrespective of the uncertainties. It has been shown that instead of the nonlinear uncertain system, one may only consider a related linear uncertain system and thus a linear static state feedback control law is designed, which is in terms of a Riccati inequality.
Stability analysis of systems with aperiodic sample-and-hold devices Motivated by the widespread use of networked and embedded control systems, improved stability conditions are derived for sampled-data feedback control systems with uncertainly time-varying sampling intervals. The results are derived by exploiting the passivity-type property of the operator arising in the input-delay approach to the system in addition to the gain of the operator, and are hence less conservative than existing ones.
Viewpoints: principles, problems and a practical approach to requirements engineering The paper includes a survey and discussion of viewpoint-oriented approaches to requirements engineering and a presentation of new work in this area which has been designed with practical application in mind. We describe the benefits of viewpoint-oriented requirements engineering and describe the strengths and weaknesses of a number of viewpoint-oriented methods. We discuss the practical problems of introducing viewpoint-oriented requirements engineering into industrial software engineering practice and why these have prevented the widespread use of existing approaches. We then introduce a new model of viewpoints called Preview. Preview viewpoints are flexible, generic entities which can be used in different ways and in different application domains. We describe the novel characteristics of the Preview viewpoints model and the associated processes of requirements discovery, analysis and negotiation. Finally, we discuss how well this approach addresses some outstanding problems in requirements engineering (RE) and the practical industrial problems of introducing new requirements engineering methods.
The interdisciplinary study of coordination This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology. A key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies. Section 3 summarizes ways of applying a coordination perspective in three different domains: (1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.
Integrating Action Systems and Z in a Medical System Specification This paper reports on work carried out on formal specification of a computer-based system that is used to train the reaction abilities of patients with severe brain damage. The system contains computer programs by which the patients carry out different tests that are designed to stimulate their eyes and ears. Systems of this type are new and no formal specifications for them exist to our knowledge. The system specified here is developed together with the neurological clinic of a Finnish...
Addressing degraded service outcomes and exceptional modes of operation in behavioural models A dependable software system should attempt to at least partially satisfy user goals if full service provision is impossible due to an exceptional situation. In addition, a dependable system should evaluate the effects of the exceptional situation on future service provision and adjust the set of services it promises to deliver accordingly. In this paper we show how to express degraded service outcomes and exceptional modes of operation in behavioural models, i.e. use cases, activity diagrams and state charts. We also outline how to integrate the task of discovering and defining degraded outcomes and exceptional modes of operation into a requirements engineering process by presenting the relevant parts of our dependability-focused requirements engineering process DREP.
On backwards and forwards reachable sets bounding for perturbed time-delay systems Linear systems with interval time-varying delay and unknown-but-bounded disturbances are considered in this paper. We study the problem of finding an outer bound of the forwards reachable sets and an inner bound of the backwards reachable sets of the system. Firstly, two definitions of forwards and backwards reachable sets, where initial state vectors are not necessarily equal to zero, are introduced. Then, by using the Lyapunov-Krasovskii method, two sufficient conditions for the existence of: (i) the smallest possible outer bound of forwards reachable sets; and (ii) the largest possible inner bound of backwards reachable sets, are derived. These conditions are presented in terms of linear matrix inequalities with two parameters that need to be tuned, and can therefore be efficiently solved by combining existing convex optimization algorithms with a two-dimensional search method to obtain optimal bounds. Lastly, the obtained results are illustrated by four numerical examples.
1.026436
0.02592
0.025675
0.01819
0.012424
0.005734
0.000005
0.000001
0
0
0
0
0
0
Edge Preserving Image Compression Technique using Adaptive Feed Forward Neural Network The aim of the paper is to develop an edge-preserving image compression technique using a one-hidden-layer feed-forward neural network whose neurons are determined adaptively. Edge detection and multi-level thresholding operations are applied to reduce the image size significantly. The processed image block is fed as a single input pattern, while a single output pattern is constructed from the original image, unlike other neural network based techniques where multiple image blocks are fed to train the network. The paper proposes initialization of the weights between the input and the lone hidden layer by transforming the pixel coordinates of the input pattern block into their equivalent one-dimensional representation. The initialization process exhibits a better rate of convergence of the back propagation training algorithm compared to randomization of initial weights. The proposed scheme has been demonstrated through several experiments, including Lena, that show very promising results in compression as well as in reconstructed images over conventional neural network based techniques available in the literature.
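The preprocessing steps named in the abstract above (edge detection followed by multi-level thresholding) can be sketched as a Sobel gradient plus fixed thresholds; the kernel, threshold levels and random test image below are illustrative choices, and the adaptive network itself is not reproduced.

```python
# Sketch of edge detection + multi-level thresholding with a Sobel filter.
import numpy as np

def sobel_edges(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode='edge')
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = pad[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(win * kx)
            gy[r, c] = np.sum(win * ky)
    return np.hypot(gx, gy)

def multilevel_threshold(gradient, levels=(32, 96, 192)):
    # map the gradient magnitude onto a small number of discrete levels
    out = np.zeros_like(gradient, dtype=np.uint8)
    for k, t in enumerate(levels, start=1):
        out[gradient >= t] = k
    return out

img = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
print(multilevel_threshold(sobel_edges(img)))
```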
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A formal approach for the development of reactive systems Context: This paper deals with the development and verification of liveness properties on reactive systems using the Event-B method. By considering the limitation of the Event-B method to invariance properties, we propose to apply the language TLA^+ to verify liveness properties on Event-B models. Objective: This paper deals with the use of two verification approaches: theorem proving and model-checking, in the construction and verification of safe reactive systems. The theorem prover concerned is part of the Click_n_Prove tool associated to the Event-B method and the model checker is TLC for TLA^+ models. Method: To verify liveness properties on Event-B systems, we extend first the expressivity and the semantics of a B model (called temporal B model) to deal with the specification of fairness and eventuality properties. Second, we propose semantics of the extension over traces, in the same spirit as TLA^+ does. Third, we give verification rules in the axiomatic way of the Event-B method. Finally, we give transformation rules from a temporal B model into a TLA^+ module. We present in particular, our prototype system called B2TLA^+, that we have developed to support this transformation; then we can verify liveness properties thanks to the model checker TLC on finite state systems. For the verification of infinite-state systems, we propose the use of the predicate diagrams and its associated tool DIXIT. As the B refinement preserves invariance properties through refinement steps, we propose some rules to get the preservation of liveness properties by the B refinement. Results: The proposed approach is applied for the development of some reactive systems examples and our prototype system B2TLA^+ is successfully used to transform a temporal B model into a TLA^+ module. Conclusion: The paper successfully defines an approach for the specification and verification of safety and liveness properties for the development of reactive systems using the Event-B method, the language TLA^+ and the predicate diagrams with their associated tools. The approach is illustrated on a case study of a parcel sorting system.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Logical Specifications for Functional Programs We present a formal method of functional program development based on step-by-step transformation.
Term transformers: A new approach to state We present a new approach to adding state and state-changing commands to a term language. As a formal semantics it can be seen as a generalization of predicate transformer semantics, but beyond that it brings additional opportunities for specifying and verifying programs. It is based on a construct called a phrase, which is a term of the form C ▷ t, where C stands for a command and t stands for a term of any type. If R is boolean, C ▷ R is closely related to the weakest precondition wp(C,R). The new theory draws together functional and imperative programming in a simple way. In particular, imperative procedures and functions are seen to be governed by the same laws as classical functions. We get new techniques for reasoning about programs, including the ability to dispense with logical variables and their attendant complexities. The theory covers both programming and specification languages, and supports unbounded demonic and angelic nondeterminacy in both commands and terms.
Dually nondeterministic functions Nondeterminacy is a fundamental notion in computing. We show that it can be described by a general theory that accounts for it in the form in which it occurs in many programming contexts, among them specifications, competing agents, data refinement, abstract interpretation, imperative programming, process algebras, and recursion theory. Underpinning these applications is a theory of nondeterministic functions; we construct such a theory. The theory consists of an algebra with which practitioners can reason about nondeterministic functions, and a denotational model to establish the soundness of the theory. The model is based on the idea of free completely distributive lattices over partially ordered sets. We deduce the important properties of nondeterministic functions.
A theory of bunches A bunch is a simple data structure, similar in many respects to a set. However, bunches differ from sets in that the data is not packaged up or encapsulated, and in particular in that a bunch consisting of one element is the same as that element. Bunches are attractive for handling nondeterminacy and underspecification, by which is meant that for any particular input to the program or specification, the associated output is not fully determined. The acceptable outputs for any given input can be described by a bunch. This approach nicely generalises traditional single-output programs and specifications. We present a formal theory of bunches. It includes axiomatisations of boolean and function types, whose behaviour is well known to be complicated by the presence of nondeterminacy. The axiomatisation of the booleans preserves most of the laws of classical predicate calculus. The axiomatisation of functions accommodates higher-order functions in all their generality, while avoiding the dangers of inconsistency when functions and nondeterminacy intermix. Our theory is presented as a Hilbert-style system of axioms and inference rules for a small specification language; we prove its consistency.
A Weaker Precondition for Loops
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
Formalising Java's data race free guarantee We formalise the data race free (DRF) guarantee provided by Java, as captured by the semi-formal Java Memory Model (JMM) [1] and published in the Java Language Specification [2]. The DRF guarantee says that all programs which are correctly synchronised (i.e., free of data races) can only have sequentially consistent behaviours. Such programs can be understood intuitively by programmers. Formalisation has achieved three aims. First, we made definitions and proofs precise, leading to a better understanding; our analysis found several hidden inconsistencies and missing details. Second, the formalisation lets us explore variations and investigate their impact in the proof with the aim of simplifying the model; we found that not all of the anticipated conditions in the JMM definition were actually necessary for the DRF guarantee. This allows us to suggest a quick fix to a recently discovered serious bug [3] without invalidating the DRF guarantee. Finally, the formal definition provides a basis to test concrete examples, and opens the way for future work on JMM-aware logics for concurrent programs.
Where Do Operations Come From? A Multiparadigm Specification Technique We propose a technique to help people organize and write complex specifications, exploiting the best features of several different specification languages. Z is supplemented, primarily with automata and grammars, to provide a rigorous and systematic mapping from input stimuli to convenient operations and arguments for the Z specification. Consistency analysis of the resulting specification is based on the structural rules. The technique is illustrated by two examples, a graphical human-computer interface and a telecommunications system.
Action Systems with Synchronous Communication This paper shows that a simple extension of the action systems framework, adding procedure declarations to action systems, will give us a very general mechanism for synchronized communication between action systems. Both actions and procedure bodies are guarded commands. When an action in one action system calls a procedure in another action system, the effect is that of a remote procedure call. The calling action and the procedure body involved in the call are executed as a single atomic...
Safeware: system safety and computers
Applying the SCR requirements method to a weapons control panel: an experience report
Normalized averaging using adaptive applicability functions with applications in image reconstruction from sparsely and randomly sampled data In this paper we describe a new strategy for using local structure adaptive filtering in normalized convolution. The shape of the filter, used as the applicability function in the context of normalized convolution, adapts to the local image structure and avoids filtering across borders. The size of the filter is also adaptable to the local sample density to avoid unnecessary smoothing over high certainty regions. We compared our adaptive interpolation technique with conventional normalized averaging methods. We found that our strategy yields a result that is much closer to the original signal both visually and in terms of MSE, meanwhile retaining sharpness and improving the SNR.
Trade-Off Analysis For Requirements Selection Evaluation, prioritization and selection of candidate requirements are of tremendous importance and impact for subsequent software development. Effort, time as well as quality constraints have to be taken into account. Typically, different stakeholders have conflicting priorities and the requirements of all these stakeholders have to be balanced in an appropriate way to ensure maximum value of the final set of requirements. Tradeoff analysis is needed to proactively explore the impact of certain decisions in terms of all the criteria and constraints.The proposed method called Quantitative WinWin uses an evolutionary approach to provide support for requirements negotiations. The novelty of the presented idea is four-fold. Firstly, it iteratively uses the Analytical Hierarchy Process (AHP) for a step-wise analysis with the aim to balance the stakeholders' preferences related to different classes of requirements. Secondly, requirements selection is based on predicting and rebalancing its impact on effort, time and quality. Both prediction and rebalancing uses the simulation model prototype GENSIM. Thirdly, alternative solution sets offered for decision-making are developed incrementally based on thresholds for the degree of importance of requirements and heuristics to find a best fit to constraints. Finally, trade-off analysis is used to determine non-dominated extensions of the maximum value that is achievable under resource and quality constraints. As a main result, quantitative WinWin proposes a small number of possible sets of requirements from which the actual decision-maker can finally select the most appropriate solution.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.054985
0.058228
0.02757
0.018346
0.004778
0.002068
0.000817
0.00004
0.000008
0.000001
0
0
0
0
Scenario inspections Scenarios help practitioners to better understand the requirements of a software system as well as its interface with the environment. However, despite their widespread use both by object-oriented development teams and human–computer interface designers, scenarios are being built in a very ad-hoc way. Departing from the requirements engineering viewpoint, this article shows how inspections help software developers to better manage the production of scenarios. We used Fagan’s inspections as the main paradigm in the design of our proposed process. The process was applied to case studies and data were collected regarding the types of problems as well as the effort to find them.
A Scenario Construction Process use cases should evolve from concrete use cases, not the other way round. Extends association let us capture the functional requirements of a complex system, in the same way we learn about any new subject: First we understand the basic functions, then we introduce complexity." Gough et al. [28] follow an approach closer to the one proposed in this article regarding their heuristics: 1. Creation of natural language documents: project scope documents, customer needs documents, service needs...
The world's a stage: a survey on requirements engineering using a real-life case study In this article we present a survey on the area of Requirements Engineering anchored on the analysis of a real life case study, the London Ambulance Service (56). We aim at bringing to context new methods, techniques and tools that should be of help to both researchers and practitioners. The case study in question is of special interest in that it is available to the public and deals with a very large system, of which the software system is only a part. The survey is divided into four topics of interest: viewpoints, social aspects, evolution and non-functional requirements. This division resulted from the work method adopted by the authors. Our main goal is to bridge recent findings in Requirements Engineering research to a real world problem. In this light, we believe this article to be an important educational device.
A Cost-Value Approach for Prioritizing Requirements Deciding which requirements really matter is a difficult task and one increasingly demanded because of time and budget constraints. The authors developed a cost-value approach for prioritizing requirements and applied it to two commercial projects.
Structured analysis for requirements definition The next article, by Ross and Schoman, is one of three papers chosen for inclusion in this book that deal with the subject of structured analysis. With its companion papers --- by Teichroew and Hershey [Paper 23] and by DeMarco [Paper 24] --- the paper gives a good idea of the direction that the software field probably will be following for the next several years. The paper addresses the problems of traditional systems analysis, and anybody who has spent any time as a systems analyst in a large EDP organization immediately will understand the problems and weaknesses of "requirements definition" that Ross and Schoman relate --- clearly not the sort of problems upon which academicians like Dijkstra, Wirth, Knuth, and most other authors in this book have focused! To stress the importance of proper requirements definition, Ross and Schoman state that "even the best structured programming code will not help if the programmer has been told to solve the wrong problem, or, worse yet, has been given a correct description, but has not understood it." In their paper, the authors summarize the problems associated with conventional systems analysis, and describe the steps that a "good" analysis approach should include. They advise that the analyst separate his logical, or functional description of the system from the physical form that it eventually will take; this is difficult for many analysts to do, since they assume, a priori, that the physical implementation of the system will consist of a computer. Ross and Schoman also emphasize the need to achieve a consensus among typically disparate parties: the user liaison personnel who interface with the developers, the "professional" systems analyst, and management. Since all of these people have different interests and different viewpoints, it becomes all the more important that they have a common frame of reference --- a common way of modeling the system-to-be. For this need, Ross and Schoman propose their solution: a proprietary package, known as SADT, that was developed by the consulting firm of SofTech for which the authors work. The SADT approach utilizes a top-down, partitioned, graphic model of a system. The model is presented in a logical, or abstract, fashion that allows for eventual implementation as a manual system, a computer system, or a mixture of both. This emphasis on graphic models of a system is distinctly different from that of the Teichroew and Hershey paper. It is distinctly similar to the approach suggested by DeMarco in "Structured Analysis and System Specification," the final paper in this collection. The primary difference between DeMarco and Ross/Schoman is that DeMarco and his colleagues at YOURDON inc. prefer circles, or "bubbles," whereas the SofTech group prefers rectangles. Ross and Schoman point out that their graphic modeling approach can be tied in with an "automated documentation" approach of the sort described by Teichroew and Hershey. Indeed, this approach gradually is beginning to be adopted by large EDP organizations; but for installations that can't afford the overhead of a computerized, automated systems analysis package, Ross and Schoman neglect one important aspect of systems modeling. That is the "data dictionary," in which all of the data elements pertinent to the new system are defined in the same logical top-down fashion as the rest of the model. There also is a need to formalize mini-specifications, or "mini-specs" as DeMarco calls them; that is, the "business policy" associated with each bottom-level functional process of the system must be described in a manner far more rigorous than currently is being done. A weakness of the Ross/Schoman paper is its lack of detail about problem solutions: More than half the paper is devoted to a description of the problems of conventional analysis, but the SADT package is described in rather sketchy detail. There are additional documents on SADT available from SofTech, but the reader still will be left with the fervent desire that Messrs. Ross and Schoman and their colleagues at SofTech eventually will sit down and put their ideas into a full-scale book.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research.
Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets
Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed.
WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (ConceptBase is available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.066667
0.066667
0.028571
0.001493
0
0
0
0
0
0
0
0
0
Time-Series Forecasting via Fuzzy-Probabilistic Approach With Evolving Clustering-Based Granulation Time-series prediction based on information granules, in which the algorithm is developed by deriving the relations existing in the granular time series, has achieved excellent success. However, the existing uncertainty in data and the computational demand of the granulation process make it difficult for these methods to accurately and efficiently achieve long-term prediction. In this article, a fuzzy-probabilistic prediction approach with evolving clustering-based granulation is proposed. First, the evolving clustering-based granulation strategy is proposed to transform the original numerical data into information granules. The granulation process is performed in an incremental way and the information granules are represented with triplets, which can efficiently reduce the computation overhead. Then, the proposed information granule clustering is used to derive the group relations in the information granules. Based on the logical relationships of information granules in the temporal order, information granule forecasting that integrates fuzzy and probability theory is proposed to deal with uncertainties and perform the final long-term prediction. A series of experiments using publicly available time series are conducted, and the comparative analysis demonstrates that the proposed approach can achieve a better performance for regular and Big Data time series than the existing granular and numeric models for long-term prediction.
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Network Topology and a Case Study in TCOZ Object-Z is strong in modeling the data and operations of complex systems. However, it is weak in specifying real-time and concurrent systems. The Timed Communicating Object-Z (TCOZ) extends Object-Z notation with Timed CSP's constructs. TCOZ is particularly well suited for specifying complex systems whose components have their own thread of control. This paper demonstrates expressiveness of the TCOZ notation through a case study on specifying a multi-lift system that operates in real-time.
A case-study in timed refinement: a mine pump A specification and top-level refinement of a simple mine pump control system, as well as a proof of correctness of the refinement, are presented as an example of the application of a formal method for the development of time-based systems. The overall approach makes use of a refinement calculus for timed systems, similar to the refinement calculi for sequential programs. The specification makes use of topologically continuous functions of time to describe both analog and discrete properties of both the system and its refinements. The basic building block of specifications is a specification statement that gives a clear separation between the specification of the assumptions that the system may make about the environment in which it is to be placed, and the effect the system is guaranteed to achieve if placed in such an environment. The top-level refinement of the system is developed by application of refinement laws that allow design decisions to be made, local state to be introduced, and the decomposition of systems into pipelined and/or parallel processes.
TRIO: A logic language for executable specifications of real-time systems We motivate the need for a formal specification language for real-time applications and for a support environment providing tools for reasoning about formal specifications. Then we introduce TRIO, a logic-based specification language. TRIO is first introduced informally through examples. Then a formal declarative semantics is provided, which can accommodate a variety of underlying time structures. Finally, the problem of executing TRIO formal specifications is discussed, and a solution is presented.
Maintaining hierarchical graph views We formalize the problem of maintaining views of graphs. These are graphs induced by the contraction of vertex subsets that are defined by associated hierarchies. We provide data structures that allow applications to refine and coarsen such views interactively and efficiently, in time linear in the number of changes induced by any exploration operation. The problem is motivated by applications in graph visualization.
Visualization of structural information: automatic drawing of compound digraphs An automatic method for drawing compound digraphs that contain both inclusion edges and adjacency edges are presented. In the method vertices are drawn as rectangles (areas for texts, images, etc.), inclusion edges by the geometric inclusion among the rectangles, and adjacency edges by arrows connecting them. Readability elements such as drawing conventions and rules are identified, and a heuristic algorithm to generate readable diagrams is developed. Several applications are shown to demonstrate the effectiveness of the algorithm. The utilization of curves to improve the quality of diagrams is investigated. A possible set of command primitives for progressively organizing structures within this graph formalism is discussed. The computational time for the applications shows that the algorithm achieves satisfactory performance
Degrees of acyclicity for hypergraphs and relational database schemes Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acyclic" database schemes was recently introduced. A number of basic desirable properties of database schemes have been shown to be equivalent to acyclicity. This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are considered and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acyclicity of the scheme. The results are also of interest from a purely graph-theoretic viewpoint. The original notion of acyclicity has the counterintuitive property that a subhypergraph of an acyclic hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyclicity that are considered.
A Graph-Based Data Model and its Ramifications Currently, database researchers are investigating new data models in order to remedy the deficiencies of the flat relational model when applied to nonbusiness applications. Herein we concentrate on a recent graph-based data model called the hypernode model. The single underlying data structure of this model is the hypernode which is a digraph with a unique defining label. We present in detail the three components of the model, namely its data structure, the hypernode, its query and update language, called HNQL, and its provision for enforcing integrity constraints. We first demonstrate that the said data model is a natural candidate for formalising hypertext. We then compare it with other graph-based data models and with set-based data models. We also investigate the expressive power of HNQL. Finally, using the hypernode model as a paradigm for graph-based data modelling, we show how to bridge the gap between graph-based and set-based data models, and at what computational cost this can be done.
Formal verification for fault-tolerant architectures: prolegomena to the design of PVS PVS is the most recent in a series of verification systems developed at SRI. Its design was strongly influenced, and later refined, by our experiences in developing formal specifications and mechanically checked verifications for the fault-tolerant architecture, algorithms, and implementations of a model "reliable computing platform" (RCP) for life-critical digital flight-control applications, and by a collaborative project to formally verify the design of a commercial avionics processor called AAMP5. Several of the formal specifications and verifications performed in support of RCP and AAMP5 are individually of considerable complexity and difficulty. But in order to contribute to the overall goal, it has often been necessary to modify completed verifications to accommodate changed assumptions or requirements, and people other than the original developer have often needed to understand, review, build on, modify, or extract part of an intricate verification. In this paper, we outline the verifications performed, present the lessons learned, and describe some of the design decisions taken in PVS to better support these large, difficult, iterative, and collaborative verifications.
Scale & Affine Invariant Interest Point Detectors In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix. Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point. We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale/affine invariant descriptors computed on the regions associated with our points.
Specifying dynamic support for collaborative work within WORLDS In this paper, we present a specification language developed for WORLDS, a next generation computer-supported collaborative work system. Our specification language, called Introspect, employs a meta-level architecture to allow run-time modifications to specifications. We believe such an architecture is essential to WORLDS' ability to provide dynamic support for collaborative work in an elegant fashion.
Statecharts in the making: a personal account This paper is a highly personal and subjective account of how the language of statecharts came into being. The main novelty of the language is in being a fully executable visual formalism intended for capturing the behavior of complex real-world systems, and an interesting aspect of its history is that it illustrates the advantages of theoreticians venturing out into the trenches of the real world, "dirtying their hands" and working closely with the system's engineers. The story is told in a way that puts statecharts into perspective and discusses the role of the language in the emergence of broader concepts, such as visual formalisms in general, reactive systems, model-driven development, model executability and code generation.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.22
0.02
0.012222
0.003333
0.0004
0.000032
0.000013
0
0
0
0
0
0
0
Formal Derivation of CSP Programs From Temporal Specifications The algebra of relations has been very successful for reasoning about possibly non-deterministic programs, provided their behaviour can be fully characterized by just their initial and final states. We use a slight generalization, called sequential algebra, to extend the scope of relation-algebraic methods to reactive systems, where the behaviour between initiation and termination is also important. To illustrate this approach, we integrate Communicating Sequential Processes and linear...
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Automatically generating test data from a Boolean specification This paper presents a family of strategies for automatically generating test data for any implementation intended to satisfy a given specification that is a Boolean formula. The fault detection effectiveness of these strategies is investigated both analytically and empirically, and the costs, assessed in terms of test set size, are compared.
Static Analysis to Identify Invariants in RSML Specifications. Static analysis of formal, high-level specifications of safety-critical software can discover flaws in the specification that would escape conventional syntactic and semantic analysis. As an example, specifications written in the Requirements State Machine Language (RSML) should be checked for consistency: two transitions out of the same state that are triggered by the same event should have mutually exclusive guarding conditions. The check uses only behavioral information that is local to...
Automatic synthesis of SARA design models from system requirements In this research in design automation, two views are employed as the requirements of a system-namely, the functional requirements and the operations concept. A requirement analyst uses data flow diagrams and system verification diagrams (SVDs) to represent the functional requirements and the operations concept, respectively. System Architect's Apprentice (SARA) is an environment-supported method for designing hardware and software systems. A knowledge-based system, called the design assistant, was built to help the system designer to transform requirements stated in one particular collection of design languages. The SVD requirement specification features and the SARA design models are reviewed. The knowledge-based tool for synthesizing a particular domain of SARA design from the requirements is described, and an example is given to illustrate this synthesis process. This example shows the rules used and how they are applied. An evaluation of the approach is given.
Validating Requirements for Fault Tolerant Systems using Model Checking Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system. One was an error in the detailed requirements, and the other two were missing/ ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective approach for maintaining fidelity between a co-evolving specification and an implementation.
Interactive verification of knowledge-based systems The Validator program, which interactively checks the consistency and completeness of a knowledge base, is discussed. Validator verifies and validates rule-based expert systems and guarantees that every element in the knowledge base is accessible and essential to the system. The program checks for syntactic errors, unused rules, facts, and questions, incorrectly used legal values, redundant constructs, rules that use illegal values, wrong instantiations, and multiple methods for obtaining values for expressions.
Behavioural Conflicts in a Causal Specification Inconsistencies may arise in the course of specification of systems, and it is now recognised that they cannot be forbidden. Recent work has concentrated on enabling requirements descriptions to tolerate inconsistency and on proposing notations that permit inconsistency in specifications. We approach the subject by examining the use of an existing causal language, which is used as a means of specifying the behaviour of systems, to specify, identify and resolve behavioural inconsistencies. This paper is an exploration of the kinds of inconsistency that can arise in a causal specification, how they can be discovered and how they can be resolved. We distinguish between inconsistencies in the structure of a specification, which are assumed to have been removed previously, and inconsistencies in behaviour which, being dynamic in nature, we describe as conflicts. Our approach concentrates on the identification of conflicts in the specified behaviour of a system. After summarising the causal language, we describe a classification of behavioural conflicts and how they can be identified. We discuss possible methods of resolution, and propose a simple process to aid the identification and resolution of conflicts. A case study using the causal language illustrates our approach.
GRAIL/KAOS: An Environment for Goal-Driven Requirements Analysis, Integration and Layout The KAOS methodology provides a language, a method, and meta-level knowledge for goal-driven requirements elaboration. The language provides a rich ontology for capturing requirements in terms of goals, constraints, objects, actions, agents etc. Links between requirements are represented as well to capture refinements, conflicts, operationalizations, responsibility assignments, etc. The KAOS specification language is a multi-paradigm language with a two-level structure: an outer semantic net layer for declaring concepts, their attributes and links to other concepts, and an inner formal assertion layer for formally defining the concept. The latter combines a real-time temporal logic for the specification of goals, constraints, and objects, and standard pre-/postconditions for the specification of actions and their strengthening to ensure the constraints
Qualitative simulation Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation with differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a set of constraints abstracted from a differential equation, we prove that the QSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation. We also show that any qualitative simulation algorithm will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the given constraints. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions.
Requirements and Specification Exemplars Specification exemplars are familiar to most software engineering researchers. For instance, many will have encountered the well known library and lift problem statements, and will have seen one or more published specifications. Exemplars may serve several purposes: to drive and communicate individual research advances; to establish research agendas and to compare and contrast alternative approaches; and, ultimately, to lead to advances in software development practices. Because of their prevalence in the literature, exemplars are worth critical study. In this paper we consider the purposes that exemplars may serve, and explore the incompatibilities inherent in trying to serve several of them at once. Researchers should therefore be clear about what successfully handling an exemplar demonstrates. We go on to examine the use of exemplars not only for writing specifications (an end product of requirements engineering), but also for the requirements engineering process itself. In particular, requirements for good requirements exemplars are suggested and ways of obtaining such exemplars are discussed.
Telos: representing knowledge about information systems We describe Telos, a language intended to support the development of information systems. The design principles for the language are based on the premise that information system development is knowledge intensive and that the primary responsibility of any language intended for the task is to be able to formally represent the relevant knowledge. Accordingly, the proposed language is founded on concepts from knowledge representation. Indeed, the language is appropriate for representing knowledge about a variety of worlds related to a particular information system, such as the subject world (application domain), the usage world (user models, environments), the system world (software requirements, design), and the development world (teams, methodologies). We introduce the features of the language through examples, focusing on those provided for describing metaconcepts that can then be used to describe knowledge relevant to a particular information system. Telos' features include an object-centered framework which supports aggregation, generalization, and classification; a novel treatment of attributes; an explicit representation of time; and facilities for specifying integrity constraints and deductive rules. We review actual applications of the language through further examples, and we sketch a formalization of the language.
Connections in acyclic hypergraphs We demonstrate a sense in which the equivalence between blocks (subgraphs without articulation points) and biconnected components (subgraphs in which there are two edge-disjoint paths between any pair of nodes) that holds in ordinary graph theory can be generalized to hypergraphs. The result has an interpretation for relational databases that the universal relations described by acyclic join dependencies are exactly those for which the connections among attributes are defined uniquely. We also exhibit a relationship between the process of Graham reduction (Graham, 1979) of hypergraphs and the process of tableau reduction (Aho, Sagiv and Ullman, 1979) that holds only for acyclic hypergraphs.
Two-dimensional PCA: a new approach to appearance-based face representation and recognition. In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.
Argos: an automaton-based synchronous language Argos belongs to the family of synchronous languages, designed for programming reactive systems: Lustre (Proceedings of the 14th Symposium on Principles of Programming Languages, Munich, 1987; Proc. IEEE 79(9) (1999) 1305), Esterel (Sci. Comput. Programming 19(2) (1992) 87), Signal (Technical Report, IRISA Report 246, IRISA, Rennes, France, 1985). Argos is a set of operators that allow to combine Boolean Mealy machines, in a compositional way. It takes its origin in Statecharts (Sci. Comput. Programming 8 (1987) 231), but with the Argos operators, one can build only a subset of Statecharts, roughly those that do not make use of multi-level arrows. We explain the main motivations for the definition of Argos, and the main differences with Statecharts and their numerous semantics. We define the set of operators, give them a perfectly synchronous semantics in the sense of Esterel, and prove that it is compositional, with respect to the trace equivalence of Boolean Mealy machines. We give an overview of the work related to the definition and implementation of Argos (code generation, connection to verification tools, introduction of non-determinism, etc.). This paper also gives a set of guidelines for building an automaton-based, Statechart-like, yet perfectly synchronous, language.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.057222
0.041408
0.040708
0.040168
0.040168
0.040168
0.020518
0.013649
0.006133
0.000212
0.000005
0
0
0
Non-separable four-dimensional integer wavelet transform with reduced rounding noise. A few decades ago, Discrete Cosine Transform (DCT) based digital image compression was adopted as the JPEG international standard. The Wavelet Transform (WT) has since replaced the DCT and is now applied in medical image compression. JPEG 2000, the international standardization of the WT, uses a separable lifting structure in which the multidimensional image signal is transformed separately in the horizontal and vertical directions, and each step is realized by cascaded lifting calculations. Because each lifting step must wait for the previous one, the overall delay time becomes longer. The proposed method reduces the number of lifting steps, and therefore the delay time between the input and output of the WT. Since each lifting step contains a rounding operation, the variance of the rounding noise generated inside the transform is also reduced. Unlike the conventional separable structure, the proposed non-separable structure reduces the rounding noise inside the transform, which leads to improved coding performance. The proposed wavelet transform has the merit that its output signal, apart from the rounding noise, is exactly the same as that of the conventional separable structure, which is a cascade of 1D structures. Experiments confirmed that the proposed method reduces the rounding noise and increases the data compression performance for various 4D input signals.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Object-Oriented Real Time Systems Modeling and Verification An object-oriented real time systems conceptual modeling approach is described. In this approach, each object is specified by an object type, consisting of supertypes (inheritance), component types (aggregation), attributes, operations, static constraints, and timed temporal constraints. An object type specification defines a theory of a type of objects. In particular, the static constraints define the valid states of the objects, the operations define the valid state transitions each consisting of a set of execution rules. Each execution rule consists of a precondition and a postcondition. The timed temporal constraints define the permissible sequences of state transitions. Atomic and composite object state diagrams (AOSD and COSD) are then constructed from a formal specification for verification of the satisfiability of the timed temporal constraints.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Oriented graph coloring An oriented k-coloring of an oriented graph G (that is a digraph with no cycle of length 2) is a partition of its vertex set into k subsets such that (i) no two adjacent vertices belong to the same subset and (ii) all the arcs between any two subsets have the same direction. We survey the main results that have been obtained on oriented graph colorings.
The Fixing Block Method in Combinatorics on Words We give an overview of the method of fixing blocks introduced by Shelton. We apply the method to words which are nonrepetitive up to mod k.
Circular words avoiding patterns We introduce the study of circular words avoiding patterns. We prove that there are circular binary cube-free words of every length and present several open problems regarding circular words avoiding more general patterns.
Non-repetitive colorings of infinite sets In this paper we investigate colorings of sets avoiding similarly colored subsets. If S is an arbitrary colored set and J is a fixed family of bijections of S to itself, then two subsets A, B ⊆ S are said to be colored similarly with respect to J, if there exists a transformation t ∈ J mapping A onto B, and preserving a coloring of A. This general setting covers some well-known topics such as non-repetitive sequences of Thue or the famous Hadwiger-Nelson problem on unit distances in Euclidean spaces. Our main theorem of this paper concerns arbitrary infinite sets, however, the most striking consequences are obtained for the case of Euclidean spaces. For instance, there exist 2-colorings of Rⁿ with no two different line segments colored similarly, with respect to translations. The method is based on the principle of induction, hence it is not constructive in general, and the problem of explicit constructions arises naturally. We give two such examples of non-repetitive colorings of the sets R and Q, with respect to translations. In conclusion of the paper we discuss possible generalizations and pose two open problems.
Non-Repetitive Tilings In 1906 Axel Thue showed how to construct an infinite non-repetitive (or square-free) word on an alphabet of size 3. Since then this result has been rediscovered many times and extended in many ways. We present a two-dimensional version of this result. We show how to construct a rectangular tiling of the plane using 5 symbols which has the property that lines of tiles which are horizontal, vertical or have slope +1 or −1 contain no repetitions. As part of the construction we introduce a new type of word, one that is non-repetitive up to mod k, which is of interest in itself. We also indicate how our results might be extended to higher dimensions.
Nonrepetitive colorings of graphs A sequence a = a1a2...an is said to be nonrepetitive if no two adjacent blocks of a are exactly the same. For instance, the sequence 1232321 contains a repetition 2323, while 123132123213 is nonrepetitive. A theorem of Thue asserts that, using only three symbols, one can produce arbitrarily long nonrepetitive sequences. In this paper we consider a natural generalization of Thue's sequences for colorings of graphs. A coloring of the set of edges of a given graph G is nonrepetitive if the sequence of colors on any path in G is nonrepetitive. We call the minimal number of colors needed for such a coloring the Thue number of G and denote it by π(G). The main problem we consider is the relation between the numbers π(G) and Δ(G). We show, by an application of the Lovász Local Lemma, that the Thue number stays bounded for graphs with bounded maximum degree, in particular, π(G) ≤ cΔ(G)² for some absolute constant c. For certain special classes of graphs we obtain linear upper bounds on π(G), by giving explicit colorings. For instance, the Thue number of the complete graph Kn is at most 2n - 3, and π(T) ≤ 4(Δ(T) - 1) for any tree T with at least two edges. We conclude by discussing some generalizations and proposing several problems and conjectures.
Formal Derivation of Strongly Correct Concurrent Programs. A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
The lattice of data refinement We define a very general notion of data refinement which comprises the traditional notion of data refinement as a special case. Using the concepts of duals and adjoints we define converse commands and find a symmetry between ordinary data refinement and a dual (backward) data refinement. We show how ordinary and backward data refinement are interpreted as simulation and we derive rules for the piecewise data refinement of programs. Our results are valid for a general language, covering...
Logarithmical hopping encoding: a low computational complexity algorithm for image compression LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber-Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient.
Class-based n-gram models of natural language We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics.
Reflection and semantics in LISP
Navigating hierarchically clustered networks through fisheye and full-zoom methods Many information structures are represented as two-dimensional networks (connected graphs) of links and nodes. Because these networks tend to be large and quite complex, people often prefer to view part or all of the network at varying levels of detail. Hierarchical clustering provides a framework for viewing the network at different levels of detail by superimposing a hierarchy on it. Nodes are grouped into clusters, and clusters are themselves placed into other clusters. Users can then navigate these clusters until an appropriate level of detail is reached. This article describes an experiment comparing two methods for viewing hierarchically clustered networks. Traditional full-zoom techniques provide details of only the current level of the hierarchy. In contrast, fisheye views, generated by the “variable-zoom” algorithm described in this article, provide information about higher levels as well. Subjects using both viewing methods were given problem-solving tasks requiring them to navigate a network, in this case, a simulated telephone system, and to reroute links in it. Results suggest that the greater context provided by fisheye views significantly improved user performance. Users were quicker to complete their task and made fewer unnecessary navigational steps through the hierarchy. This validation of fisheye views is important for designers of interfaces to complicated monitoring systems, such as control rooms for supervisory control and data acquisition systems, where efficient human performance is often critical. However, control room operators remained concerned about the size and visibility tradeoffs between the fine detail provided by full-zoom techniques and the global context supplied by fisheye views. Specific interface features are required to reconcile the differences.
A Task-Based Methodology for Specifying Expert Systems A task-based specification methodology for expert system specification that is independent of the problem solving architecture, that can be applied to many expert system applications, that focuses on what the knowledge is, not how it is implemented, that introduces the major concepts involved gradually, and that supports verification and validation is discussed. To evaluate the methodology, a specification of R1/SOAR, an expert system that reimplements a major portion of the R1 expert system, was reverse engineered.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.201797
0.068225
0.068177
0.067949
0.05122
0.010632
0
0
0
0
0
0
0
0
Sharp Retrenchment, Modulated Refinement and Simulation. Sharp retrenchment is introduced and briefly justified informally, as a liberalisation of refinement. In sharp retrenchment the relationship between an abstract operation and its concrete counterpart is mediated by extra predicates, allowing most particularly the description of non-refinement-like properties, and the mixing of I/O and state aspects in the passage between levels of abstraction. Sharp retrenchments are briefly contrasted with unsharp ones. Sharp retrenchments are shown to have a natural law of composition, and the way in which refinements may be viewed as sharp retrenchments is discussed. Modulated refinement is introduced as a version of refinement allowing mixing of I/O and state aspects, in order to facilitate comparison between sharp retrenchment and refinement, and various notions of simulation are considered in this context, specifically: stepwise simulation, the ability of the simulator to mimic a sequence of execution steps of the simulatee; strong simulation, in which states and step labels are mapped independently between simulatee and simulator; and the refinement notion itself. Special cases of sharp retrenchment are shown to possess various subsets of these simulation properties, and the extent to which sharp retrenchments contain refinements within them is addressed. The details of the theory are worked out for the B-Method, though the applicability of the
Specware: Formal Support for Composing Software
Retrenchment: Extending the Reach of Refinement Discussion of a simple example demonstrates various expressive limitations of the refinement calculus, and suggests a liberalization of refinement, called retrenchment, which will support an analogous formal development calculus. Useful concrete system behavior can be specified outside the domain of pure refinement, and a case is made for fluidity between I/O and state components across the development step.A syntax and a formal definition are presented for retrenchment, which has some necessary properties for a formal development calculus: transitivity gives stepwise composition of retrenchments, and monotonicity w.r.t. the specification language constructors gives piecewise construction of retrenchments.
Retrenchment: An Engineering Variation on Refinement It is argued that refinement, in which I/O signatures stay the same, preconditions are weakened and postconditions strengthened, is too restrictive to describe all but a fraction of many realistic developments. An alternative notion is proposed called retrenchment, which allows information to migrate between I/O and state aspects of operations at different levels of abstraction, and which allows only a fraction of the high level behaviour to be captured at the low level. This permits more of the informal aspects of design to be formally captured and checked. The details are worked out for the B-Method.
Event Based Sequential Program Development: Application to Constructing a Pointer Program In this article, I present an "event approach" used to formally develop sequential programs. It is based on the formalism of Action Systems [6] (and Guarded Commands [7]), which is interesting because it involves a large number of pointer manipulations.
The weakest prespecification
(INTER-)ACTION REFINEMENT: THE EASY WAY We outline and illustrate a formal concept for the specification and refinement of networks of interactive components. We describe systems by modular, functional specification techniques. We distinguish between black box and glass box views of interactive system components as well as refinements of their black box and glass box views. We identify and discuss several classes of refinements such as behaviour refinement, communication history refinement, interface interaction refinement, state space refinement, distribution refinement, and others. In particular, we demonstrate how these concepts of refinement and their verification are supported by functional specification techniques leading to a general formal refinement calculus. It can be used as the basis for a method for the development of distributed interactive systems.
Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
State-Based Model Checking of Event-Driven System Requirements It is demonstrated how model checking can be used to verify safety properties for event-driven systems. SCR tabular requirements describe required system behavior in a format that is intuitive, easy to read, and scalable to large systems (e.g. the software requirements for the A-7 military aircraft). Model checking of temporal logics has been established as a sound technique for verifying properties of hardware systems. An automated technique for formalizing the semiformal SCR requirements and for transforming the resultant formal specification onto a finite structure that a model checker can analyze has been developed. This technique was effective in uncovering violations of system invariants in both an automobile cruise control system and a water-level monitoring system.
Compact chart: a program logic notation with high describability and understandability This paper describes an improved flow chart notation, Compact Chart, developed because the flow chart conception is effective in constructing program logics, but the conventional notation for it is ineffective. By introducing the idea of separation of control transfer and process description, Compact Charting gives an improved method of representing and understanding program logics.
Conceptual Graphs and First-Order Logic. Conceptual Structures (CS) Theory is a logic-based knowledge representation formalism. To show that conceptual graphs have the power of first-order logic, it is necessary to have a mapping between both formalisms. A proof system, i.e. axioms and inference rules, for conceptual graphs is also useful. It must be sound (no false statement is derived from a true one) and complete (all possible tautologies can be derived from the axioms). This paper shows that Sowa's original definition of...
Hy+: a Hygraph-based query and visualization system
Matching pedagogical intent with engineering design process models for precollege education Public perception of engineering recognizes its importance to national and international competitiveness, economy, quality of life, security, and other fundamental areas of impact; but uncertainty about engineering among the general public remains. Federal funding trends for education underscore many of the concerns regarding teaching and learning in science, technology, engineering, and mathematics subjects in primary through grade 12 (P-12) education. Conflicting perspectives on the essential attributes that comprise the engineering design process result in a lack of coherent criteria against which teachers and administrators can measure the validity of a resource, or assess its strengths and weaknesses, or grasp incongruities among competing process models. The literature suggests two basic approaches for representing engineering design: a phase-based, life cycle-oriented approach; and an activity-based, cognitive approach. Although these approaches serve various teaching and functional goals in undergraduate and graduate engineering education, as well as in practice, they tend to exacerbate the gaps in P-12 engineering efforts, where appropriate learning objectives that connect meaningfully to engineering are poorly articulated or understood. In this article, we examine some fundamental problems that must be resolved if preengineering is to enter the P-12 curriculum with meaningful standards and is to be connected through learning outcomes, shared understanding of engineering design, and other vestiges to vertically link P-12 engineering with higher education and the practice of engineering. We also examine historical aspects, various pedagogies, and current issues pertaining to undergraduate and graduate engineering programs. As a case study, we hope to shed light on various kinds of interventions and outreach efforts to inform these efforts or at least provide some insight into major factors that shape and define the environment and cultures of the two institutions (including epistemic perspectives, institutional objectives, and political constraints) that are very different and can compromise collaborative efforts between the institutions of P-12 and higher education.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.122726
0.146667
0.099951
0.043114
0.026667
0.008316
0.000321
0.000077
0.000001
0
0
0
0
0
The S/Net's Linda kernel (extended abstract) No abstract available.
Distributed process groups in the V Kernel The V kernel supports an abstraction of processes, with operations for interprocess communication, process management, and memory management. This abstraction is used as a software base for constructing distributed systems. As a distributed kernel, the V kernel makes intermachine boundaries largely transparent. In this environment of many cooperating processes on different machines, there are many logical groups of processes. Examples include the group of file servers, a group of processes executing a particular job, and a group of processes executing a distributed parallel computation. In this paper we describe the extension of the V kernel to support process groups. Operations on groups include group interprocess communication, which provides an application-level abstraction of network multicast. Aspects of the implementation and performance, and initial experience with applications are discussed.
S/NET: A High-Speed Interconnect for Multiple Computers This paper describes S/NET (symmetric network), a high-speed small area interconnect that supports effective multiprocessing using message-based communication. This interconnect provides low latency, bounded contention time, and high throughput. It further provides hardware support for low level flow control and signaling. The interconnect is a star network with an active switch. The computers connect to the switch through full duplex fiber links. The S/NET provides a simple memory addressable interface to the processors and appears as a logical bus interconnect. The switch provides fast, fair, and deterministic contention resolution. It further supports high priority signals to be sent unimpeded in presence of data traffic (this can be viewed as equivalent to interrupts on a conventional memory bus). The initial implementation supports a mix of VAX computers and Motorola 68000 based single board computers up to a maximum of 12. The switch throughput is 80 Mbits/s and the fiber links operate at a data rate of 10 Mbits/s. The kernel-to-kernel latency is only 100 μs. We present a description of the architecture and discuss the performance of current systems.
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Generative communication in Linda Generative communication is the basis of a new distributed programming language that is intended for systems programming in distributed settings generally and on integrated network computers in particular. It differs from previous interprocess communication models in specifying that messages be added in tuple-structured form to the computation environment, where they exist as named, independent entities until some process chooses to receive them. Generative communication results in a number of distinguishing properties in the new language, Linda, that is built around it. Linda is fully distributed in space and distributed in time; it allows distributed sharing, continuation passing, and structured naming. We discuss these properties and their implications, then give a series of examples. Linda presents novel implementation problems that we discuss in Part II. We are particularly concerned with implementation of the dynamic global name space that the generative communication model requires.
The mystery of the tower revealed: a non-reflective description of the reflective tower In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection.
A bidirectional data driven Lisp engine for the direct execution of Lisp in parallel
STATEMATE: a working environment for the development of complex reactive systems This paper provides a brief overview of the STATEMATE system, constructed over the past three years by i-Logix, Inc., and Ad Cad Ltd. STATEMATE is a graphical working environment, intended for the specification, analysis, design and documentation of large and complex reactive systems, such as real-time embedded systems, control and communication systems, and interactive software. It enables a user to prepare, analyze and debug diagrammatic, yet precise, descriptions of the system under development from three inter-related points of view, capturing structure, functionality and behavior. These views are represented by three graphical languages, the most intricate of which is the language of statecharts used to depict reactive behavior over time. In addition to the use of statecharts, the main novelty of STATEMATE is in the fact that it 'understands' the entire descriptions perfectly, to the point of being able to analyze them for crucial dynamic properties, to carry out rigorous animated executions and simulations of the described system, and to create running code automatically. These features are invaluable when it comes to the quality and reliability of the final outcome.
Requirements-Based Testing of Real-Time Systems. Modeling for Testability
A practical approach to combining requirements definition and object-oriented analysis According to our experience in real-world projects, we still observe deficiencies of current methods for object-oriented analysis (OOA), especially in respect to the early elicitation and definition of requirements. Therefore, we used object-oriented technology and hypertext to develop a practical approach – with tool support – that tightly combines OOA with requirements definition. This novel approach is compatible with virtually any OOA method. While more work needs to be done especially for supporting the process of requirements definition, the observed deficiencies and current limitations of existing OOA methods are addressed and partly removed through this combination. We have applied our approach in real-world projects, and our experience suggests the usefulness of this approach. Essentially, its use leads to a more complete and structured definition of the requirements, and consequently we derive some recommendations for practitioners.
A co-operative scenario based approach to acquisition and validation of system requirements: How exceptions can help! Scenarios, in most situations, are descriptions of required interactions between a desired system and its environment, which detail normative system behaviour. Our studies of current scenario use in requirements engineering have revealed that there is considerable interest in the use of scenarios for acquisition, elaboration and validation of system requirements. However, scenarios have seldom bee...
A Case Study in Transformational Design of Concurrent Systems We explain a transformational approach to the design and verification of communicating concurrent systems. The transformations start from specifications that combine trace-based with state-based assertional reasoning about the desired communication behaviour, and yield concurrent implementations. We illustrate our approach by a case study proving correctness of implementations of safe and regular registers allowing concurrent writing and reading phases, originally due to Lamport...
Feature based classification of computer graphics and real images Photorealistic images can now be created using advanced techniques in computer graphics (CG). Synthesized elements could easily be mistaken for photographic (real) images. Therefore we need to differentiate between CG and real images. In our work, we propose and develop a new framework based on an aggregate of existing features. Our framework has a classification accuracy of 90% when tested on the de facto standard Columbia dataset, which is 4% better than the best results obtained by other prominent methods in this area. We further show that using feature selection it is possible to reduce the feature dimension of our framework from 557 to 80 without a significant loss in performance (≪ 1%). We also investigate different approaches that attackers can use to fool the classification system, including creation of hybrid images and histogram manipulations. We then propose and develop filters to effectively detect such attacks, thereby limiting the effect of such attacks to our classification system.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.06727
0.019309
0.017425
0.008706
0.006241
0.0004
0.000185
0.000023
0.000001
0
0
0
0
0
Ensuring the quality of conceptual representations High quality data and process representations are critical to the success of system development efforts. Despite this importance, quantitative methods for evaluating the quality of a representation are virtually nonexistent. This is a major shortcoming. However, there is another approach. Instead of evaluating the quality of the final representation, the representation process itself can be evaluated. This paper views the modeling process as a communication channel. In a good communication channel, sufficient error prevention, error detection, and error correction mechanisms exist to ensure that the output message matches the input message. A good modeling process will also have mechanisms for preventing, detecting, and correcting errors at each step from observation to elicitation to analysis to final representation. This paper describes a theoretically-based set of best practices for ensuring that each step of the process is performed correctly, followed by a proof of concept experiment demonstrating the utility of the method for producing a representation that closely reflects the real world.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions of both functional and temporal properties, and furthermore, power related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
CREWS-SAVRE: Scenarios for Acquiring and Validating Requirements This paper reports research into semi-automatic generation of scenarios for validating software-intensive system requirements. The research was undertaken as part of the ESPRIT IV 21903 ‘CREWS’ long-term research project. The paper presents the underlying theoretical models of domain knowledge, computational mechanisms and user-driven dialogues needed for scenario generation. It describes how CREWS draws on theoretical results from the ESPRIT III 6353 ‘NATURE’ basic research action, that is object system models which are abstractions of the fundamental features of different categories of problem domain. CREWS uses these models to generate normal course scenarios, then draws on theoretical and empirical research from cognitive science, human-computer interaction, collaborative systems and software engineering to generate alternative courses for these scenarios. The paper describes a computational mechanism for deriving use cases from object system models, simple rules to link actions in a use case, taxonomies of classes of exceptions which give rise to alternative courses in scenarios, and a computational mechanism for generation of multiple scenarios from a use case specification.
A co-operative scenario based approach to acquisition and validation of system requirements: How exceptions can help! Scenarios, in most situations, are descriptions of required interactions between a desired system and its environment, which detail normative system behaviour. Our studies of current scenario use in requirements engineering have revealed that there is considerable interest in the use of scenarios for acquisition, elaboration and validation of system requirements. However, scenarios have seldom bee...
Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness.
Generating, integrating, and activating thesauri for concept-based document retrieval A blackboard-based document management system that uses a neural network spreading-activation algorithm which lets users traverse multiple thesauri is discussed. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts. The system provides two control modes: a browsing module and an activation module that determine the sequence of operations. With the browsing module, users have full control over which knowledge sources to browse and what terms to select. The system's query formation; the retrieving, ranking and selection of documents; and thesaurus activation are described.
Document ranking and the vector-space model Efficient and effective text retrieval techniques are critical in managing the increasing amount of textual information available in electronic form. Yet text retrieval is a daunting task because it is difficult to extract the semantics of natural language texts. Many problems must be resolved before natural language processing techniques can be effectively applied to a large collection of texts. Most existing text retrieval techniques rely on indexing keywords. Unfortunately, keywords or index terms alone cannot adequately capture the document contents, resulting in poor retrieval performance. Yet keyword indexing is widely used in commercial systems because it is still the most viable way by far to process large amounts of text. Using several simplifications of the vector-space model for text retrieval queries, the authors seek the optimal balance between processing efficiency and retrieval effectiveness as expressed in relevant document rankings
Tolerant planning and negotiation in generating coordinated movement plans in an automated factory Plan robustness is important for real world applications where modelling imperfections often result in execution deviations. The concept of tolerant planning is suggested as one of the ways to build robust plans. Tolerant planning achieves this aim by being tolerant of an agent's own execution deviations. When applied to multi-agent domains, it has the additional characteristic of being tolerant of other agents' deviant behaviour. Tolerant planning thus defers dynamic replanning until execution errors become excessive. The underlying strategy is to provide more than ample resources for agents to achieve their goals. Such redundancies aggravate the resource contention problem. To counter this, the iterative negotiation mechanism is suggested. It requires agents to be skillful in negotiating with other agents to resolve conflicts in such a way as to minimize compromising one's own tolerances and yet being benevolent in helping others find a feasible plan.
Issues in automated negotiation and electronic commerce: extending the contract net framework In this paper we discuss a number of previously unaddressed issues that arise in automated negotiation among self-interested agents whose rationality is bounded by computational complexity. These issues are presented in the context of iterative task allocation negotiations. First, the reasons why such agents need to be able to choose the stage and level of commitment dynamically are identified. A protocol that allows such choices through conditional commitment breaking penalties is presented. Next, the implications of bounded rationality are analysed. Several tradeoffs between allocated computation and negotiation benefits and risk are enumerated, and the necessity of explicit local deliberation control is substantiated. Techniques for linking negotiation items and multiagent contracts are presented as methods for escaping local optima in the task allocation process. Implementing both methods among self-interested bounded rational agents is discussed. Finally, the problem of message congestion among self-interested agents is described, and alternative remedies are presented.
Design and Development Assessment An assessment methodology is described and illustrated. This methodology separates assessment into the following phases (1) Elicitation of requirements; (2) Elicitation of failure modes and their impact (risk of loss of requirements); (3) Elicitation of failure mode mitigations and their effectiveness (degree of reduction of failure modes); (4) Calculation of outstanding risk considering the mitigations. This methodology, with accompanying tool support, has been applied to assist in planning the engineering development of advanced technologies. Design assessment featured prominently in these applications. The overall approach is also applicable to development assessment (of the development process to be followed to implement the design). Both design and development assessments are demonstrated on hypothetical scenarios based on the workshop's TRMCS case study. TRMCS information has been entered into the assessment support tool, and serves as illustration throughout.
Elements of style: analyzing a software design feature with a counterexample detector We illustrate the application of Nitpick, a specification checker, to the design of a style mechanism for a word processor. The design is cast, along with some expected properties, in a subset of Z. Nitpick checks a property by enumerating all possible cases within some finite bounds, displaying as a counterexample the first case for which the property fails to hold. Unlike animation or execution tools, Nitpick does not require state transitions to be expressed constructively, and unlike theorem provers, operates completely automatically without user intervention. Using a variety of reduction mechanisms, it can cover an enormous number of cases in a reasonable time, so that subtle flaws can be rapidly detected.
A logic of action for supporting goal-oriented elaborations of requirements Constructing requirements specifications for a complex system is a quite difficult process. In this paper, we have focussed on the elaboration part of this process where new requirements are progressively identified and incorporated in the requirements document. We propose a requirements specification language which, beyond the mere expression of requirements, also supports the elaboration step. This language is a Gist’s dialect where the concepts of goals and the one of agent characterized by some responsibility are identified. A formalization of this requirements language is proposed in terms of a non standard modal logic of actions.
Biting the silver bullet: toward a brighter future for system development The author responds to two discouraging position papers by F.B. Brooks, Jr. (see ibid., vol.20, no.4, p 10-19, 1987) and D.L. Parnas (see Commun. ACM, vol.28, no.12, p.1326-35, 1985) regarding the potential of software engineering. While agreeing with most of the specific points made in both papers, he illuminates the brighter side of the coin, emphasizing developments in the field that were too recent or immature to have influenced Brooks and Parnas when they wrote their manuscripts. He reviews their arguments, and then considers a class of systems that has been termed reactive, which are widely considered to be particularly problematic. He reviews a number of developments that have taken place in the past several years and submits that they combine to form the kernel of a solid general-purpose approach to the development of complex reactive systems.
Linear hybrid action systems Action Systems is a predicate transformer based formalism. It supports the development of provably correct reactive and distributed systems by refinement. Recently, Action Systems were extended with a differential action. It is used for modelling continuous behaviour, thus, allowing the use of refinement in the development of provably correct hybrid systems, i.e., a discrete controller interacting with some continuously evolving environment. However, refinement as a method is concerned with correctness issues only. It offers very little guidance in what details one should consider during the refinement steps to make the system more robust. That information is revealed by robustness analysis. Other formalisms not supporting refinement do have tool support for automating the robustness analysis, e.g., HyTech for linear hybrid automata. Consequently, we study in this paper the non-trivial translation problem between Action Systems and linear hybrid automata. As the main contribution, we give and prove correct an algorithm that translates a linear hybrid action system to a linear hybrid automaton. With this algorithm we combine the strengths of the two formalisms: we may use HyTech for the robustness analysis to guide the development by refinement.
Large project experiences with object-oriented methods and reuse
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.059875
0.04244
0.04014
0.04014
0.04014
0.04014
0.020338
0.01338
0.005068
0.000209
0.000001
0
0
0
Performance benchmarking of deep learning framework on Intel Xeon Phi With the success of deep learning (DL) methods in diverse application domains, several deep learning software frameworks have been proposed to facilitate the usage of these methods. By knowing the frameworks which are employed in big data analysis, the analysis process will be more efficient in terms of time and accuracy. Thus, benchmarking DL software frameworks is in high demand. This paper presents a comparative study of deep learning frameworks, namely Caffe and TensorFlow on performance metrics: runtime performance and accuracy. This study is performed with several datasets, such as LeNet MNIST classification model, CIFAR-10 image recognition datasets and message passing interface (MPI) parallel matrix-vector multiplication. We evaluate the performance of the above frameworks when employed on machines of Intel Xeon Phi 7210. In this study, the use of vectorization, OpenMP parallel processing, and MPI are examined to improve the performance of deep learning frameworks. The experimental results show the accuracy comparison between the number of iterations of the test in the training model and the training time on the different machines before and after optimization. In addition, an experiment on two multi-nodes of Xeon Phi is performed. The experimental results also show the optimization of Xeon Phi is beneficial to the Caffe and TensorFlow frameworks.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions of both functional and temporal properties, and furthermore, power related issues are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Low line memory visually lossless compression for color images using non-uniform quantizers The paper proposes a novel method to compress color images with imperceptible quality loss. The algorithm explores the difference in error perceptibility of human visual system (HVS) for various areas. It is done by implementing different non-uniform quantizers for flat, detail and random blocks of pixels. These blocks are classified based on principle component analysis (PCA) and prediction error. For hardware implementation purpose, the algorithm is designed to use very low line memory. Simulation results show that the proposed compression is visually lossless in all categories of tested images with high compression ratio.
The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS
An Overview of KRL, a Knowledge Representation Language
Formal Derivation of Strongly Correct Concurrent Programs A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures he observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction.
Object-oriented modeling and design
Reasoning Algebraically about Loops We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops.
Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.000329
0
0
0
0
0
0
0
0
0
0
0
0
Using statecharts to model hypertext
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Text Categorization with Support Vector Machines: Learning with Many Relevant Features This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs are appropriate for this task. Empirical results support the theoretical findings. SVMs achieve substantial improvements over the currently best performing methods and behave robustly over a variety of different learning tasks. Furthermore, they are fully automatic, eliminating the need for manual...
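As a concrete counterpart to the SVM text-categorization entry above, here is a minimal scikit-learn sketch (TF-IDF features feeding a linear SVM); the four-document corpus and its labels are invented purely for illustration and bear no relation to the experiments in the paper.

# TF-IDF features plus a linear SVM on a toy two-class corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["stock prices fell sharply today",
        "the team won the championship game",
        "quarterly earnings beat expectations",
        "the striker scored twice in the final"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["earnings report released by the bank"]))  # likely ['finance']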
Automatic identification of personal insults on social news sites As online communities grow and the volume of user-generated content increases, the need for community management also rises. Community management has three main purposes: to create a positive experience for existing participants, to promote appropriate, socionormative behaviors, and to encourage potential participants to make contributions. Research indicates that the quality of content a potential participant sees on a site is highly influential; off-topic, negative comments with malicious intent are a particularly strong boundary to participation or set the tone for encouraging similar contributions. A problem for community managers, therefore, is the detection and elimination of such undesirable content. As a community grows, this undertaking becomes more daunting. Can an automated system aid community managers in this task? In this paper, we address this question through a machine learning approach to automatic detection of inappropriate negative user contributions. Our training corpus is a set of comments from a news commenting site that we tasked Amazon Mechanical Turk workers with labeling. Each comment is labeled for the presence of profanity, insults, and the object of the insults. Support vector machines trained on these data are combined with relevance and valence analysis systems in a multistep approach to the detection of inappropriate negative user contributions. The system shows great potential for semiautomated community management. © 2012 Wiley Periodicals, Inc.
Chunking with support vector machines We apply Support Vector Machines (SVMs) to identify English base phrases (chunks). SVMs are known to achieve high generalization performance even with input data of high dimensional feature spaces. Furthermore, by the Kernel principle, SVMs can carry out training with smaller computational overhead independent of their dimensionality. We apply weighted voting of 8 SVMs-based systems trained with distinct chunk representations. Experimental results show that our approach achieves higher accuracy than previous approaches.
Rule writing or annotation: cost-efficient resource usage for base noun phrase chunking This paper presents a comprehensive empirical comparison between two approaches for developing a base noun phrase chunker: human rule writing and active learning using interactive real-time human annotation. Several novel variations on active learning are investigated, and underlying cost models for cross-modal machine learning comparison are presented and explored. Results show that it is more efficient and more successful by several measures to train a system using active learning annotation rather than hand-crafted rule writing at a comparable level of human labor investment.
Scikit-learn: Machine Learning in Python Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
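The cross-validation entry above recommends ten-fold stratified cross-validation for model selection. A minimal sketch of that estimate with scikit-learn follows, using the built-in iris data and a decision tree as a rough stand-in for C4.5 (an assumption made only to keep the example self-contained).

# Ten-fold stratified cross-validation of a decision tree on the iris data.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print("10-fold stratified CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))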
From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series We connect measures of public opinion measured from polls with sentiment measured from text. We analyze several surveys on consumer confidence and political opinion over the 2008 to 2009 period, and find they correlate to sentiment word frequencies in contemporaneous Twitter messages. While our results vary across datasets, in several cases the correlations are as high as 80%, and capture important large-scale trends. The results highlight the potential of text streams as a substitute and supplement for traditional polling.
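The tweets-to-polls entry above boils down to correlating a lexicon-based daily sentiment score with a poll time series. The sketch below shows that core computation with invented word lists, toy messages, and hypothetical poll values; it is not the authors' pipeline.

# Daily smoothed positive/negative word ratio correlated with a poll series.
import numpy as np

POS = {"good", "great", "happy"}
NEG = {"bad", "terrible", "sad"}

def daily_sentiment(messages):
    words = [w for m in messages for w in m.lower().split()]
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    return (pos + 1) / (neg + 1)              # add-one smoothing

days = [["great news today", "happy with the result"],
        ["bad day", "terrible service", "good coffee though"],
        ["sad story", "bad weather"]]
sentiment = np.array([daily_sentiment(d) for d in days])
polls = np.array([62.0, 48.0, 41.0])          # hypothetical poll values

print("Pearson r:", np.corrcoef(sentiment, polls)[0, 1])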
Joint sentiment/topic model for sentiment analysis Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity, and minimum prior information has also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.
On Formalism in Specifications A critique of a natural-language specification, followed by presentation of a mathematical alternative, demonstrates the weakness of natural language and the strength of formalism in requirements specifications.
Proving Liveness Properties of Concurrent Programs
A General Scheme for Breadth-First Graph Traversal We survey an algebra of formal languages suitable to deal with graph algorithms. As an example of its use we derive a general scheme for breadth-first graph traversal. This general scheme is then applied to a reachability and a shortest path problem. 1 Introduction In books about algorithmic graph theory, algorithms are usually presented without formal specification and formal development. Some approaches, in contrast, provide a more precise treatment of graph algorithms, resulting in...
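The breadth-first traversal entry above names two applications, reachability and shortest paths. A plain queue-based BFS covering both follows; it is an ordinary textbook implementation, not the algebraic derivation the paper develops, and the small graph is invented.

# Queue-based BFS: the distance map doubles as a reachability set and as
# unweighted shortest-path lengths from the source.
from collections import deque

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:                 # first visit yields the shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(bfs_distances(graph, "a"))              # {'a': 0, 'b': 1, 'c': 1, 'd': 2, 'e': 3}
print("e reachable from a:", "e" in bfs_distances(graph, "a"))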
Linda in adolescence
Measuring process flexibility and agility In their attempt to improve their systems and architectures, organizations need to be aware of the types of flexibility and agility and the current level of each type. Flexibility is the general ability to react to changes, whilst agility is the speed in responding to variety and changes. Both flexibility and agility are diverse concepts that are hard to grasp. In this paper the types of flexibility and agility of business processes are discussed on a foundational level and an approach to measure the level of flexibility and agility is proposed. A case study of the flexibility and agility measurement is used to demonstrate the approach. The illustration is used to discuss the difficulties and limitations of the measurement approach. There is no uniform definition of or view on flexibility and agility, which makes it hard to develop a measurement approach. Furthermore, as business processes can be different, this might result in different metrics for measuring the level of flexibility and agility. There is no single measure; for each type of business process, flexibility and agility should always be measured by a combination of metrics. In addition, both qualitative and quantitative metrics should be used to measure the level of flexibility and agility.
Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se...
Scores (score_0 to score_13): 1.039792, 0.05, 0.026559, 0.02547, 0.025, 0.008333, 0.000277, 0.000065, 0, 0, 0, 0, 0, 0
Model-Based Specification of Virtual Interaction Environments This paper discusses a model-based approach to the design of complex interaction environments like virtual worlds, mixed and augmented reality. The environment a user interacts with is seen as a virtual environment populated by virtual entities, created and maintained active by a program interpreted by the computer, which can be described by specifying the behavior of the population. The specification of the behavior occurs along three dimensions: 1) programming languages to specify system computations; 2) user activity languages to specify user activities; 3) perceptual languages to deal with the physical characteristics of the messages from the machine to the user. These dimensions define an interaction modeling space which constitutes the frame in which the virtual environment is specified.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Parallel programming with coordination structures
Parallel Programming in Linda
Conception, evolution, and application of functional programming languages The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
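To make the remote-procedure-call entry above concrete, here is a minimal sketch using Python's standard-library XML-RPC modules. It is unrelated to the Cedar RPC package the paper describes; the port number and the add procedure are arbitrary choices for the example.

# A local XML-RPC server and client running in one process.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b                              # the "remote" procedure body

server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False, allow_none=True)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client-side proxy makes the network call look like a local procedure call.
proxy = ServerProxy("http://127.0.0.1:8000", allow_none=True)
print(proxy.add(2, 3))                        # -> 5
server.shutdown()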
Queue-based multi-processing LISP As the need for high-speed computers increases, the need for multi-processors will be become more apparent. One of the major stumbling blocks to the development of useful multi-processors has been the lack of a good multi-processing language—one which is both powerful and understandable to programmers. Among the most compute-intensive programs are artificial intelligence (AI) programs, and researchers hope that the potential degree of parallelism in AI programs is higher than in many other applications. In this paper we propose multi-processing extensions to Lisp. Unlike other proposed multi-processing Lisps, this one provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
The architecture of a Linda coprocessor We describe the architecture of a coprocessor that supports the communication primitives of the Linda parallel programming environment in hardware. The coprocessor is a critical element in the architecture of the Linda Machine, an MIMD parallel processing system that is designed top down from the specifications of Linda. Communication in Linda programs takes place through a logically shared associative memory mechanism called tuple space. The Linda Machine, however, has no physically shared memory. The microprogrammable coprocessor implements distributed protocols for executing tuple space operations over the Linda Machine communication network. The coprocessor has been designed and is in the process of fabrication. We discuss the projected performance of the coprocessor and compare it with software Linda implementations. This work is supported in part by National Science Foundation grants CCR-8657615 and ONR N00014-86-K-0310.
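Several entries in this row concern Linda, so a tiny in-memory sketch of its tuple-space primitives (out, rd, in) may help. This is a software toy built on a single lock and condition variable, nothing like the coprocessor described above; None is used as a wildcard (formal) field in templates.

# A toy, thread-safe tuple space with blocking rd/in operations.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                       # deposit a tuple
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            f is None or f == v for f, v in zip(template, tup))

    def _take(self, template, remove):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                self._cond.wait()             # block until a matching out()

    def rd(self, template):                   # read without removing
        return self._take(template, remove=False)

    def in_(self, template):                  # read and remove ('in' is a Python keyword)
        return self._take(template, remove=True)

ts = TupleSpace()
threading.Thread(target=lambda: ts.out(("sum", 2, 3)), daemon=True).start()
print(ts.in_(("sum", None, None)))            # blocks until the tuple arrives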
Formal Derivation of Strongly Correct Concurrent Programs. Summary: A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
An image multiresolution representation for lossless and lossy compression We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods.
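The multiresolution-transform entry above relies on a reversible decomposition computed with integer additions and bit shifts. The sketch below shows the same idea in one dimension using the classic S-transform (truncated integer average plus difference), which is an assumption for illustration rather than the authors' exact transform.

# Forward and inverse 1-D S-transform using only integer adds and shifts.
def s_transform(signal):
    """Split an even-length integer sequence into low-pass and high-pass parts."""
    low, high = [], []
    for a, b in zip(signal[::2], signal[1::2]):
        high.append(a - b)                    # integer detail
        low.append((a + b) >> 1)              # truncated integer average
    return low, high

def inverse_s_transform(low, high):
    signal = []
    for l, h in zip(low, high):
        b = l - (h >> 1)                      # '>> 1' is floor division by 2
        a = b + h
        signal.extend([a, b])
    return signal

x = [12, 7, 300, 301, -5, 9]
low, high = s_transform(x)
assert inverse_s_transform(low, high) == x    # lossless: exact recovery
print(low, high)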
Supporting systems development by capturing deliberations during requirements engineering Support for various stakeholders involved in software projects (designers, maintenance personnel, project managers and executives, end users) can be provided by capturing the history about design decisions in the early stages of the system's development life cycle in a structured manner. Much of this knowledge, which is called the process knowledge, involving the deliberation on alternative requirements and design decisions, is lost in the course of designing and changing such systems. Using an empirical study of problem-solving behavior of individual and groups of information systems professionals, a conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects that are created during the requirements engineering process has been developed. A prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.
Algorithms for drawing graphs: an annotated bibliography Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and software engineering diagrams. In this paper we present a bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread over the broad spectrum of Computer Science. This bibliography constitutes an attempt to encompass both theoretical and application oriented papers from disparate areas.
Automatic construction of networks of concepts characterizing document databases Two East-bloc computing knowledge bases, both based on a semantic network structure, were created automatically from large, operational textual databases using two statistical algorithms. The knowledge bases were evaluated in detail in a concept-association experiment based on recall and recognition tests. In the experiment, one of the knowledge bases, which exhibited the asymmetric link property, outperformed four experts in recalling relevant concepts in East-bloc computing. The knowledge base, which contained 20000 concepts (nodes) and 280000 weighted relationships (links), was incorporated as a thesaurus-like component in an intelligent retrieval system. The system allowed users to perform semantics-based information management and information retrieval via interactive, conceptual relevance feedback
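The concept-network entry above builds weighted, asymmetric links between terms from co-occurrence statistics. A toy version of that construction follows, with the link weight taken as a simple conditional co-occurrence ratio; the real system used specific cluster-weighting algorithms, and the three mini-documents here are invented.

# Asymmetric link weights w(i -> j) = cooccurrence(i, j) / frequency(i).
from collections import Counter
from itertools import permutations

docs = [{"prolog", "logic", "ai"},
        {"ai", "expert", "systems"},
        {"prolog", "ai", "expert"}]

term_count = Counter(t for d in docs for t in d)
pair_count = Counter(p for d in docs for p in permutations(d, 2))

network = {(i, j): pair_count[(i, j)] / term_count[i] for (i, j) in pair_count}

for (i, j), w in sorted(network.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{i} -> {j}: {w:.2f}")             # strongest directed associations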
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1.2496, 0.041664, 0.0416, 0.03127, 0.000122, 0.000071, 0, 0, 0, 0, 0, 0, 0, 0
Identifying and classifying ambiguity for regulatory requirements Software engineers build software systems in increasingly regulated environments, and must therefore ensure that software requirements accurately represent obligations described in laws and regulations. Prior research has shown that graduate-level software engineering students are not able to reliably determine whether software requirements meet or exceed their legal obligations and that professional software engineers are unable to accurately classify cross-references in legal texts. However, no research has determined whether software engineers are able to identify and classify important ambiguities in laws and regulations. Ambiguities in legal texts can make the difference between requirements compliance and non-compliance. Herein, we develop an ambiguity taxonomy based on software engineering, legal, and linguistic understandings of ambiguity. We examine how 17 technologists and policy analysts in a graduate-level course use this taxonomy to identify ambiguity in a legal text. We also examine the types of ambiguities they found and whether they believe those ambiguities should prevent software engineers from implementing software that complies with the legal text. Our research suggests that ambiguity is prevalent in legal texts. In 50 minutes of examination, participants in our case study identified on average 33.47 ambiguities in 104 lines of legal text using our ambiguity taxonomy as a guideline. Our analysis suggests (a) that participants used the taxonomy as intended: as a guide and (b) that the taxonomy provides adequate coverage (97.5%) of the ambiguities found in the legal text.
FOL-Based Approach for Improving Legal-GRL Modeling Framework: A Case for Requirements Engineering of Legal Regulations of Social Media Requirements engineers need to have a comprehensive requirements modeling framework for modeling legal requirements, particularly for privacy-related regulations, which are required for IT systems. The nature of law demands a special approach for dealing with the complexity of regulations. In this paper, we integrate different approaches for modeling legal requirements into one unified framework. We use semantic parameterization technique and first-order logic (FOL) approach for extracting legal requirements from legal documents. We then use Goal-oriented Requirements Language (GRL) to illustrate and evaluate the models. The aim of this paper is to improve and extend the existing Legal-GRL framework using semantic parameterization process and FOL. We use social media as the example to illustrate our approach.
Legal goal-oriented requirement language (legal GRL) for modeling regulations Every year, governments introduce new or revised regulations that are imposing new types of requirements on software development. Analyzing and modeling these legal requirements is time consuming, challenging and cumbersome for software and requirements engineers. Having regulation models can help understand regulations and converge toward better compliance levels for software and systems. This paper introduces a systematic method to extract legal requirements from regulations by mapping the latter to the Legal Profile for Goal-oriented Requirements Language (GRL) (Legal GRL). This profile provides a conceptual meta-model for the anatomy of regulations and maps its elements to standard GRL with specialized annotations and links, with analysis techniques that exploit this additional information. The paper also illustrates examples of Legal GRL models for The Privacy and Electronic Communications Regulations. Existing tool support (jUCMNav) is also extended to support Legal GRL modeling.
Automated text mining for requirements analysis of policy documents Businesses and organizations in jurisdictions around the world are required by law to provide their customers and users with information about their business practices in the form of policy documents. Requirements engineers analyze these documents as sources of requirements, but this analysis is a time-consuming and mostly manual process. Moreover, policy documents contain legalese and present readability challenges to requirements engineers seeking to analyze them. In this paper, we perform a large-scale analysis of 2,061 policy documents, including policy documents from the Google Top 1000 most visited websites and the Fortune 500 companies, for three purposes: (1) to assess the readability of these policy documents for requirements engineers; (2) to determine if automated text mining can indicate whether a policy document contains requirements expressed as either privacy protections or vulnerabilities; and (3) to establish the generalizability of prior work in the identification of privacy protections and vulnerabilities from privacy policies to other policy documents. Our results suggest that this requirements analysis technique, developed on a small set of policy documents in two domains, may generalize to other domains.
ANTLR: a predicated-LL(k) parser generator In this paper, we introduce the ANTLR (ANother Tool for Language Recognition) parser generator, which addresses all these issues. ANTLR is a component of the Purdue Compiler Construction Tool Set (PCCTS)
Security and Privacy Requirements Analysis within a Social Setting Security issues for software systems ultimately concern relationships among social actors - stakeholders, system users, potential attackers - and the software acting on their behalf. This paper proposes a methodological framework for dealing with security and privacy requirements based on i*, an agent-oriented requirements modeling language. The framework supports a set of analysis techniques. In particular, attacker analysis helps identify potential system abusers and their malicious intents. Dependency vulnerability analysis helps detect vulnerabilities in terms of organizational relationships among stakeholders. Countermeasure analysis supports the dynamic decision-making process of defensive system players in addressing vulnerabilities and threats. Finally, access control analysis bridges the gap between security requirement models and security implementation models. The framework is illustrated with an example involving security and privacy concerns in the design of agent-based health information systems. In addition, we discuss model evaluation techniques, including qualitative goal model analysis and property verification techniques based on model checking.
The mystery of the tower revealed: a non-reflective description of the reflective tower Abstract In an important series of papers [8, 9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection. 1. Modeling reflection
Toward reference models for requirements traceability Requirements traceability is intended to ensure continued alignment between stakeholder requirements and various outputs of the system development process. To be useful, traces must be organized according to some modeling framework. Indeed, several such frameworks have been proposed, mostly based on theoretical considerations or analysis of other literature. This paper, in contrast, follows an empirical approach. Focus groups and interviews conducted in 26 major software development organizations demonstrate a wide range of traceability practices with distinct low-end and high-end users of traceability. From these observations, reference models comprising the most important kinds of traceability links for various development tasks have been synthesized. The resulting models have been validated in case studies and are incorporated in a number of traceability tools. A detailed case study on the use of the models is presented. Four kinds of traceability link types are identified and critical issues that must be resolved for implementing each type and potential solutions are discussed. Implications for the design of next-generation traceability methods and tools are discussed and illustrated.
Efficient Estimation of Word Representations in Vector Space We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
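The word-embedding entry above describes the skip-gram and CBOW models popularized as word2vec. A hedged sketch using the gensim reimplementation follows (parameter names assume gensim 4.x); the three-sentence corpus is far too small to yield meaningful vectors and serves only to show the API shape.

# Skip-gram word vectors with gensim (vector_size/sg/epochs assume gensim 4.x).
from gensim.models import Word2Vec

sentences = [["king", "rules", "the", "kingdom"],
             ["queen", "rules", "the", "kingdom"],
             ["dog", "chases", "the", "cat"]]

model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1, epochs=100, seed=1)
print(model.wv["king"][:4])                   # first few vector components
print(model.wv.most_similar("king", topn=2))  # nearest neighbours by cosine similarity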
AToM3: A Tool for Multi-formalism and Meta-modelling This article introduces the combined use of multiformalism modelling and meta-modelling to facilitate computer assisted modelling of complex systems. The approach allows one to model different parts of a system using different formalisms. Models can be automatically converted between formalisms thanks to information found in a Formalism Transformation Graph (FTG), proposed by the authors. To aid in the automatic generation of multi-formalism modelling tools, formalisms are modelled in their own right (at a meta-level) within an appropriate formalism. This has been implemented in the interactive tool AToM3. This tool is used to describe formalisms commonly used in the simulation of dynamical systems, as well as to generate custom tools to process (create, edit, transform, simulate, optimise, ...) models expressed in the corresponding formalism. AToM3 relies on graph rewriting techniques and graph grammars to perform the transformations between formalisms as well as for other tasks, such as code generation and operational semantics specification.
Reasoning about Action Systems using the B-Method The action system formalism has been successfully used when constructing parallel and distributed systems in a stepwise manner within the refinement calculus. Usually the derivation is carried out manually. In order to be able to produce more trustworthy software, some mechanical tool is needed. In this paper we show how action systems can be derived and refined within the B-Toolkit, which is a mechanical tool supporting a software development method, the B-Method. We describe how action systems are embedded in the B-Method. Furthermore, we show how a typical and nontrivial refinement rule, the superposition refinement rule, is formalized and applied on action systems within the B-Method. In addition to providing tool support for action system refinement we also extend the application area of the B-Method to cover parallel and distributed systems. A derivation towards a distributed load balancing algorithm is given as a case study.
Involutions On Relational Program Calculi The standard Galois connection between the relational and predicate-transformer models of sequential programming (defined in terms of weakest precondition) confers a certain similarity between them. This paper investigates the extent to which the important involution on transformers (which, for instance, interchanges demonic and angelic nondeterminism, and reduces the two kinds of simulation in the relational model to one kind in the transformer model) carries over to relations. It is shown that no exact analogue exists; that the two complement-based involutions are too weak to be of much use; but that the translation to relations of transformer involution under the Galois connection is just strong enough to support Boolean-algebra style reasoning, a claim that is substantiated by proving properties of deterministic computations. Throughout, the setting is that of the guarded-command language augmented by the usual specification commands; and where possible algebraic reasoning is used in place of the more conventional semantic reasoning.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0 to score_13): 1.11, 0.1, 0.073333, 0.005, 0.000111, 0.000026, 0, 0, 0, 0, 0, 0, 0, 0
Deriving programs by combining and adapting refinement scripts Although program refinement is usually presented as a top-down process, real programs are usually constructed by extending, adapting and combining existing programs. We show how this kind of program development can be performed within the refinement calculus using editable refinement scripts, which can be extended, adapted and combined in this way. Our approach is illustrated by a sequence of examples, beginning with a list insertion algorithm and culminating in a stable sorting algorithm.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that the conditions of both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Modeling and parallel evaluation of non-functional requirements using FRORL requirements language Current requirements specifications tend to focus more on the functional aspect than the nonfunctional side of a product. Many techniques and tools have been developed to specify and evaluate functional requirements, but few of them are equipped to deal with nonfunctional requirements. In this paper we extend the formal requirements specification language, FRORL, to model nonfunctional requirements and show how these nonfunctional requirements are related to the functional requirements. We also introduce a parallel evaluation technique to evaluate the functional requirements model by satisfying the nonfunctional requirements associated with it. By examining the result from the functional model, we can see how the nonfunctional requirements are satisfied, so that we can adjust and modify the nonfunctional requirements.
Issues in the Development of Large, Distributed, and Reliable Software
Tools for specifying real-time systems Tools for formally specifying software for real-time systems have strongly improved their capabilities in recent years. At present, tools have the potential for improving software quality as well as engineers' productivity. Many tools have grown out of languages and methodologies proposed in the early 1970s. In this paper, the evolution and the state of the art of tools for real-time software specification is reported, by analyzing their development over the last 20 years. Specification techniques are classified as operational, descriptive or dual if they have both operational and descriptive capabilities. For each technique reviewed three different aspects are analyzed, that is, power of formalism, tool completeness, and low-level characteristics. The analysis is carried out in a comparative manner; a synthetic comparison is presented in the final discussion where the trend of technology improvement is also analyzed.
Timing requirements for time-driven systems using augmented Petri Nets A methodology for the statement of timing requirements is presented for a class of embedded computer systems. The notion of a "time-driven" system is introduced which is formalized using a Petri net model augmented with timing information. Several subclasses of time-driven systems are defined with increasing levels of complexity. By deriving the conditions under which the Petri net model can be proven to be safe in the presence of time, timing requirements for modules in the system can be obtained. Analytical techniques are developed for proving safeness in the presence of time for the net constructions used in the defined subclasses of time-driven systems.
Real-time constraints in a rapid prototyping language This paper presents real-time constraints of a prototyping language and some mechanisms for handling these constraints in rapidly prototyping embedded systems. Rapid prototyping of embedded systems can be accomplished using a Computer Aided Prototyping System (CAPS) and its associated Prototyping Language (PSDL) to aid the designer in handling hard real-time constraints. The language models time critical operations with maximum execution times, maximum response times and minimum periods. The mechanisms for expressing timing constraints in PSDL are described along with their meanings relative to a series of hardware models which include multi-processor configurations. We also describe a language construct for specifying the policies governing real-time behavior under overload conditions.
Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers.
Behavioural Conflicts in a Causal Specification Inconsistencies may arise in the course of specification of systems, and it is now recognised that they cannot be forbidden. Recent work has concentrated on enabling requirements descriptions to tolerate inconsistency and on proposing notations that permit inconsistency in specifications. We approach the subject by examining the use of an existing causal language, which is used as a means of specifying the behaviour of systems, to specify, identify and resolve behavioural inconsistencies. This paper is an exploration of the kinds of inconsistency that can arise in a causal specification, how they can be discovered and how they can be resolved. We distinguish between inconsistencies in the structure of a specification, which are assumed to have been removed previously, and inconsistencies in behaviour which, being dynamic in nature, we describe as conflicts. Our approach concentrates on the identification of conflicts in the specified behaviour of a system. After summarising the causal language, we describe a classification of behavioural conflicts and how they can be identified. We discuss possible methods of resolution, and propose a simple process to aid the identification and resolution of conflicts. A case study using the causal language illustrates our approach.
Analogical Reuse of Requirements Specifications: A Computational Model Specifications of requirements for new software systems can be revised, refined, or completed in reference to specifications of requirements for existing similar systems. Although realized as a form of analogical problem solving, specification by reuse is not adequately supported by available computational models for detecting analogies. This is chiefly due to the following reasons: (1) It is assumed that specifications are expressed according to the same specification model and in a uniform representation scheme. (2) Additional information is needed for the detection of analogies, which is not contained in the specifications. (3) Performance scales poorly with the complexity of specifications. This article presents a computational model for detecting analogies, which addresses these issues to a certain extent. The application of the model in the specification of requirements by analogical reuse is demonstrated through an example, and its sensitivity to the representation of specifications is discussed. Finally, the results of a preliminary empirical evaluation of the model are reported.
Using the WinWin Spiral Model: A Case Study At the 1996 and 1997 International Conferences on Software Engineering, three of the six keynote addresses identified negotiation techniques as the most critical success factor in improving the outcome of software projects. The USC Center for Software Engineering has been developing a negotiation-based approach to software system requirements engineering, architecture, development, and management. This approach has three primary elements: Theory W, a management theory and approach, which says that making winners of the system's key stakeholders is a necessary and sufficient condition for project success. The WinWin spiral model, which extends the spiral software development model by adding Theory W activities to the front of each cycle. WinWin, a groupware tool that makes it easier for distributed stakeholders to negotiate mutually satisfactory (win-win) system specifications. This article describes an experimental validation of this approach, focusing on the application of the WinWin spiral model. The case study involved extending USC's Integrated Library System to access multimedia archives, including films, maps, and videos. The study showed that the WinWin spiral model is a good match for multimedia applications and is likely to be useful for other applications with similar characteristics--rapidly moving technology, many candidate approaches, little user or developer experience with similar systems, and the need for rapid completion.
Capturing more world knowledge in the requirements specification The view is adopted that software requirements involve the representation (modeling) of considerable real-world knowledge, not just functional specifications. A framework (RMF) for requirements models is presented and its main features are illustrated. RMF allows information about three types of conceptual entities (objects, activities, and assertions) to be recorded uniformly using the notion of properties. By grouping all entities into classes or metaclasses, and by organizing classes into generalization (specialization) hierarchies, RMF supports three abstraction principles (classification, aggregation, and generalization) which appear to be of universal importance in the development and organization of complex descriptions. Finally, by providing a mathematical model underlying our terminology, we achieve both unambiguity and the potential to verify consistency of the model.
A Comparison of Languages which Operationalize and Formalise KADS Models of Expertise In the field of knowledge engineering, dissatisfaction with the rapid-prototyping approach has led to a number of more principled methodologies for the construction of knowledge-based systems. Instead of immediately implementing the gathered and interpreted knowledge in a given implementation formalism according to the rapid-prototyping approach, many such methodologies centre around the notion of a conceptual model: an abstract, implementation independent description of the relevant problem solving expertise. A conceptual model should describe the task which is solved by the system and the knowledge which is required by it. Although such conceptual models have often been formulated in an informal way, recent years have seen the advent of formal and operational languages to describe such conceptual models more precisely, and operationally as a means for model evaluation. In this paper, we study a number of such formal and operational languages for specifying conceptual models. To enable a meaningful comparison of such languages, we focus on languages which are all aimed at the same underlying conceptual model, namely that from the KADS method for building KBS. We describe eight formal languages for KADS models of expertise, and compare these languages with respect to their modelling primitives, their semantics, their implementations and their applications. Future research issues in the area of formal and operational specification languages for KBS are identified as the result of studying these languages. The paper also contains an extensive bibliography of research in this area.
Noesis: Towards a situational method engineering technique Standard methods as such are not normally used for information system development. The particular circumstances of each project make it necessary to adapt the methods to deal with the situation at hand. This is the concern of situational method engineering, where the term situational method is used to refer to a method tailored to the needs of a particular development setting. Situational method engineering prescribes the performance of this method customization within the framework of a meta-modelling technique provided with mechanisms to manipulate methods (or fragments of them) for their modification, integration, adaptation or evolution. As a first step towards the definition of a situational method engineering technique, in this paper we propose the Noesis meta-modelling technique together with a complete and minimal family of transformations. The Noesis technique allows recursive and decompositional structures to be captured in the meta-models (which is a demandable requirement for meta-modelling techniques) and situational methods to be obtained by the assembly of method fragments. In addition, the family of transformations allows method fragment customization processes to be accomplished. The main contribution of this paper is the definition of this family and the proof of its completeness and minimality (which is an important open issue with respect to customization of method fragments), the Noesis technique being the scaffolding needed to show this.
Deterministic vector long-term forecasting for fuzzy time series In the last decade, fuzzy time series have received more attention due their ability to deal with the vagueness and incompleteness inherent in time series data. Although various improvements, such as high-order models, have been developed to enhance the forecasting performance of fuzzy time series, their forecasting capability is mostly limited to short-term time spans and the forecasting of a single future value in one step. This paper presents a new method to overcome this shortcoming, called deterministic vector long-term forecasting (DVL). The proposed method, built on the basis of our previous deterministic forecasting method that does not require the overhead of determining the order number, as in other high-order models, utilizes a vector quantization technique to support forecasting if there are no matching historical patterns, which is usually the case with long-term forecasting. The vector forecasting method is further realized by seamlessly integrating it with the sliding window scheme. Finally, the forecasting effectiveness and stability of DVL are validated and compared by performing Monte Carlo simulations on real-world data sets.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1.210112, 0.210112, 0.061461, 0.047803, 0.005001, 0.000154, 0.000133, 0.000067, 0.000033, 0.000006, 0, 0, 0, 0
Algorithms for drawing graphs: an annotated bibliography Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and software engineering diagrams. In this paper we present a bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread over the broad spectrum of Computer Science. This bibliography constitutes an attempt to encompass both theoretical and application oriented papers from disparate areas.
Drawing Clustered Graphs on an Orthogonal Grid Clustered graphs are graphs with recursive clustering structures over the vertices. For graphical representation, the clustering structure is represented by a simple region that contains the drawing of all the vertices which belong to that cluster. In this paper, we present an algorithm which produces planar drawings of clustered graphs in a convention known as orthogonal grid rectangular cluster drawings. If the input graph has n vertices, then the algorithm produces in O(n) time a drawing with O(n^2) area and at most 3 bends in each edge. This result is as good as existing results for classical planar graphs. Further, we show that our algorithm is optimal in terms of the number of bends per edge.
Planarity for Clustered Graphs In this paper, we introduce a new graph model known as clustered graphs, i.e. graphs with recursive clustering structures. This graph model has many applications in informational and mathematical sciences. In particular, we study C-planarity of clustered graphs. Given a clustered graph, the C-planarity testing problem is to determine whether the clustered graph can be drawn without edge crossings, or edge-region crossings. In this paper, we present efficient algorithms for testing C-planarity and finding C-planar embeddings of clustered graphs.
Randomized graph drawing with heavy-duty preprocessing We present a graph drawing system for general undirected graphs with straight-line edges. It carries out a rather complex set of preprocessing steps, designed to produce a topologically good, but not necessarily nice-looking layout, which is then subjected to Davidson and Harel's simulated annealing beautification algorithm. The intermediate layout is planar for planar graphs and attempts to come close to planar for nonplanar graphs. The system's results are significantly better, and much faster, than what the annealing approach is able to achieve on its own.
Miro: Visual Specification of Security Miro is a set of languages and tools that support the visual specification of file system security. Two visual languages are presented: the instance language, which allows specification of file system access, and the constraint language, which allows specification of security policies. Miro visual languages and tools are used to specify security configurations. A visual language is one whose entities are graphical, such as boxes and arrows, specifying means stating independently of any implementation the desired properties of a system. Security means file system protection: ensuring that files are protected from unauthorized access and granting privileges to some users, but not others. Tools implemented and examples of how these languages can be applied to real security specification problems are described.
On Diagram Tokens and Types Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax?
ENIAM: a more complete conceptual schema language
Hypertext: An Introduction and Survey
Proving entailment between conceptual state specifications The lack of expressive power of temporal logic as a specification language can be compensated to a certain extent by the introduction of powerful, high-level temporal operators, which are difficult to understand and reason about. A more natural way to increase the expressive power of a temporal specification language is by introducing conceptual state variables, which are auxiliary (unimplemented) variables whose values serve as an abstract representation of the internal state of the process being specified. The kind of specifications resulting from the latter approach are called conceptual state specifications. This paper considers a central problem in reasoning about conceptual state specifications: the problem of proving entailment between specifications. A technique, based on the notion of simulation between machines, is shown to be sound for proving entailment. A kind of completeness result can also be shown if specifications are assumed to satisfy well-formedness conditions. The role played by entailment in proofs of correctness is illustrated by the problem of proving that the concatenation of two FIFO buffers implements a FIFO buffer.
An example of stepwise refinement of distributed programs: quiescence detection We propose a methodology for the development of concurrent programs and apply it to an important class of problems: quiescence detection. The methodology is based on a novel view of programs. A key feature of the methodology is the separation of concerns between the core problem to be solved and details of the forms of concurrency employed in the target architecture and programming language. We begin development of concurrent programs by ignoring issues dealing with concurrency and introduce such concerns in manageable doses. The class of problems solved includes termination and deadlock detection.
Specware: Formal Support for Composing Software
Requirements definition and its interface to the SARA design methodology for computer-based systems This paper presents results of efforts during 1979--1981 to integrate and enhance the work of the System ARchitects Apprentice (SARA) Project at UCLA and the Information System Design Optimization System (ISDOS) Project at the University of Michigan. While expressing a need for a requirements definition subsystem, SARA had no appropriate requirements definition language, no defined set of requirements analysis techniques or tools, and no procedures to form a more cohesive methodology for linking computer system requirements to the ensuing design. Research has been performed to fill this requirements subsystem gap, using concepts derived from the ISDOS project as a basis for departure.
A proof-based approach to verifying reachability properties This paper presents a formal approach to proving temporal reachability properties, expressed in CTL, on B systems. We are particularly interested in demonstrating that a system can reach a given state by executing a sequence of actions (or operation calls) called a path. Starting with a path, the proposed approach consists in calculating the proof obligations to discharge in order to prove that the path allows the system to evolve in order to verify the desired property. Since these proof obligations are expressed as first logic formulas without any temporal operator, they can be discharged using the prover of AtelierB. Our proposal is illustrated through a case study.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1.023216, 0.006515, 0.006349, 0.005843, 0.003339, 0.000339, 0.000051, 0.000003, 0, 0, 0, 0, 0, 0
Investigating System Survivability From A Probabilistic Perspective Survivability is an essential requirement of networked information systems, analogous to dependability. The definition of survivability proposed by Knight in [16] provides a rigorous way to define the concept. However, Knight's specification provides neither a behavior model of the system nor a verification framework for determining the survivability of a system satisfying a given specification. This paper proposes a complete formal framework for specifying and verifying the concept of system survivability on the basis of Knight's research. A computable probabilistic model is proposed to specify the functions and services of a networked information system. A quantified survivability specification is proposed to indicate the requirement of survivability. A probabilistic refinement relation is defined to determine the survivability of the system. The framework is then demonstrated with three case studies: the restaurant system (RES), the Warship Command and Control system (LWC) and the Command-and-Control (C2) system.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that the conditions of both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Time Series Forecasting Using Hybrid Neuro-Fuzzy Regression Model During the past few decades various time-series forecasting methods have been developed for financial market forecasting leading to improved decisions and investments. But accuracy remains a matter of concern in these forecasts. The quest is thus on improving the effectiveness of time-series models. Artificial neural networks (ANN) are flexible computing paradigms and universal approximations that have been applied to a wide range of forecasting problems with high degree of accuracy. However, they need large amount of historical data to yield accurate results. The real world situation experiences uncertain and quick changes, as a result of which future situations should be forecasted using small amount of data from a short span of time. Therefore, forecasting in these situations requires techniques that work efficiently with incomplete data for which Fuzzy sets are ideally suitable. In this work, a hybrid Neuro-Fuzzy model combining the advantages of ANN and Fuzzy regression is developed to forecast the exchange rate of US Dollar to Indian Rupee. The model yields more accurate results with fewer observations and incomplete data sets for both point and interval forecasts. The empirical results indicate that performance of the model is comparatively better than other models which make it an ideal candidate for forecasting and decision making.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In refining an abstract system into a more concrete one, a designer must show that the conditions of both functional and temporal properties, and furthermore power related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Towards the Improvement of Topic Priority Assignment Using Various Topic Detection Methods for E-reputation Monitoring on Twitter.
Exploiting Wikipedia for Entity Name Disambiguation in Tweets.
Automatic Classification and PLS-PM Modeling for Profiling Reputation of Corporate Entities on Twitter In this paper, we address the task of detecting reputation alerts in social media updates, that is, deciding whether newly arriving content has strong and immediate implications for the reputation of a given entity. This content is also mapped to a standard typology of reputation dimensions, that is, a broad classification of the aspects of a company under public audience. A reputation manager needs a real-time database and method to report what is happening right now to his brand. However, typical Natural Language Processing (NLP) approaches to these tasks require external resources and show non-relational modeling. We propose a fast supervised approach for extracting textual features, which we use to train simple statistical reputation classifiers. The outputs of these classifiers are used in a Partial Least Squares Path Modeling (PLS-PM) system to model the reputation. Experiments on the RepLab 2013 and 2014 collections show that our approaches perform as well as more complex state-of-the-art methods.
REINA at RepLab2013 Topic Detection Task: Community Detection.
Distributed Representations of Words and Phrases and their Compositionality. The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
Adding semantics to microblog posts Microblogs have become an important source of information for the purpose of marketing, intelligence, and reputation management. Streams of microblogs are of great value because of their direct and real-time nature. Determining what an individual microblog post is about, however, can be non-trivial because of creative language usage, the highly contextualized and informal nature of microblog posts, and the limited length of this form of communication. We propose a solution to the problem of determining what a microblog post is about through semantic linking: we add semantics to posts by automatically identifying concepts that are semantically related to it and generating links to the corresponding Wikipedia articles. The identified concepts can subsequently be used for, e.g., social media mining, thereby reducing the need for manual inspection and selection. Using a purpose-built test collection of tweets, we show that recently proposed approaches for semantic linking do not perform well, mainly due to the idiosyncratic nature of microblog posts. We propose a novel method based on machine learning with a set of innovative features and show that it is able to achieve significant improvements over all other methods, especially in terms of precision.
V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure
Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques when 'programming in the small' and do not stand on the firm experience of existing developments methods for large systems. One such technique called block design has been used within the telecommunication industry and relies on a similar paradigm as object-oriented programming. The two techniques together with a third technique, conceptual modeling used for requirement modeling of information systems, have been unified into a method for the development of large systems.
Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CLP(R) is used to illustrate verification techniques.
A Study of The Fragile Base Class Problem In this paper we study the fragile base class problem. This problem occurs in open object-oriented systems employing code inheritance as an implementation reuse mechanism. System developers unaware of extensions to the system developed by its users may produce a seemingly acceptable revision of a base class which may damage its extensions. The fragile base class problem becomes apparent during maintenance of open object-oriented systems, but requires consideration during design. We express the fragile base class problem in terms of a flexibility property. By means of five orthogonal examples violating the flexibility property, we demonstrate different aspects of the problem. We formulate requirements for disciplining inheritance, and extend the refinement calculus to accommodate classes, objects, class-based inheritance, and class refinement. We formulate and formally prove a flexibility theorem demonstrating that the restrictions we impose on inheritance are sufficient to permit safe substitution of a base class with its revision in the presence of extension classes.
Reflection and semantics in LISP
Visualizing Argument Structure Constructing arguments and understanding them is not easy. Visualization of argument structure has been shown to help understanding and improve critical thinking. We describe a visualization tool for understanding arguments. It utilizes a novel hi-tree based representation of the argument’s structure and provides focus based interaction techniques for visualization. We give efficient algorithms for computing these layouts.
Software size estimation of object-oriented systems The strengths and weaknesses of existing size estimation techniques are discussed. The nature of software size estimation is considered. The proposed method takes advantage of a characteristic of object-oriented systems, the natural correspondence between specification and implementation, in order to enable users to come up with better size estimates at early stages of the software development cycle. Through a statistical approach the method also provides a confidence interval for the derived size estimates. The relation between the presented software sizing model and project cost estimation is also considered.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
Scores (score_0 to score_13): 1.071111, 0.073333, 0.066667, 0.038333, 0.007407, 0.001333, 0.00005, 0, 0, 0, 0, 0, 0, 0
Refinement-based modeling of the ErbB signaling pathway. The construction of large scale biological models is a laborious task, which is often addressed by adopting iterative routines for model augmentation, adding certain details to an initial high level abstraction of the biological phenomenon of interest. Refitting a model at every step of its development is time consuming and computationally intensive. The concept of model refinement brings about an effective alternative by providing adequate parameter values that ensure the preservation of its quantitative fit at every refinement step. We demonstrate this approach by constructing the largest-ever refinement-based biomodel, consisting of 421 species and 928 reactions. We start from an already fit, relatively small literature model whose consistency we check formally. We then construct the final model through an algorithmic step-by-step refinement procedure that ensures the preservation of the model's fit.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An intrusion detection system integrating network-level intrusion detection and host-level intrusion detection With the rapid development of the Internet, the issue of cyber security has gained increasing attention. An Intrusion Detection System (IDS) is an effective technique to defend against cyber-attacks and reduce security losses. However, the challenge of IDS lies in the diversity of cyber-attackers and the frequently changing data, which require a flexible and efficient solution. To address this problem, machine learning approaches are being applied in the IDS field. In this paper, we propose an efficient, scalable neural-network-based hybrid IDS framework that combines Host-level IDS (HIDS) and Network-level IDS (NIDS). We applied autoencoders (AE) to the NIDS and designed the HIDS using word embedding and a convolutional neural network. To evaluate the IDS, many experiments were performed on the public datasets NSL-KDD and ADFA. It can detect many attacks and reduce security risk with high efficiency and excellent scalability.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Escaping the software tar pit: model clashes and how to avoid them "No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits… Large system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it…" "Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it. But we must try to understand it if we are to solve it." Fred Brooks, 1975. Several recent books and reports have confirmed that the software tar pit is at least as hazardous today as it was in 1975. Our research into several classes of models used to guide software development (product models, process models, property models, success models) has convinced us that the concept of model clashes among these classes of models helps explain much of the stickiness of the software tar-pit problem. We have been developing and experimentally evolving an approach called MBASE -- Model-Based (System) Architecting and Software Engineering -- which helps identify and avoid software model clashes. Section 2 of this paper introduces the concept of model clashes, and provides examples of common clashes for each combination of product, process, property, and success model. Sections 3 and 4 introduce the MBASE approach for endowing a software project with a mutually supportive set of models, and illustrate the application of MBASE to an example corporate resource scheduling system. Section 5 summarizes the results of applying the MBASE approach to a family of small digital library projects. Section 6 presents conclusions to date.
Using the WinWin Spiral Model: A Case Study At the 1996 and 1997 International Conferences on Software Engineering, three of the six keynote addresses identified negotiation techniques as the most critical success factor in improving the outcome of software projects. The USC Center for Software Engineering has been developing a negotiation-based approach to software system requirements engineering, architecture, development, and management. This approach has three primary elements: Theory W, a management theory and approach, which says that making winners of the system's key stakeholders is a necessary and sufficient condition for project success. The WinWin spiral model, which extends the spiral software development model by adding Theory W activities to the front of each cycle. WinWin, a groupware tool that makes it easier for distributed stakeholders to negotiate mutually satisfactory (win-win) system specifications. This article describes an experimental validation of this approach, focusing on the application of the WinWin spiral model. The case study involved extending USC's Integrated Library System to access multimedia archives, including films, maps, and videos. The study showed that the WinWin spiral model is a good match for multimedia applications and is likely to be useful for other applications with similar characteristics--rapidly moving technology, many candidate approaches, little user or developer experience with similar systems, and the need for rapid completion.
Identifying Quality-Requirement Conflicts Despite well-specified functional and interface requirements, many software projects have failed because they had a poor set of quality-attribute requirements. To find the right balance of quality-attribute requirements, you must identify the conflicts among desired quality attributes and work out a balance of attribute satisfaction. We have developed The Quality Attribute Risk and Conflict Consultant, a knowledge-based tool that can be used early in the system life cycle to identify potential conflicts. QARCC operates in the context of the WinWin system, a groupware support system that determines software and system requirements as negotiated win conditions. This article summarizes our experiences developing the QARCC-1 prototype using an early version of WinWin, and our integration of the resulting improvements into QARCC-2.
Software requirements as negotiated win conditions Current processes and support systems for software requirements determination and analysis often neglect the critical needs of important classes of stakeholders, and limit themselves to the concerns of the developers, users and customers. These stakeholders can include maintainers, interfacers, testers, product line managers, and sometimes members of the general public. This paper describes the results to date in researching and prototyping a next-generation process model (NGPM) and support system (NGPSS) which directly addresses these issues. The NGPM emphasizes collaborative processes, involving all of the significant constituents with a stake in the software product. Its conceptual basis is a set of “theory W” (win-win) extensions to the spiral model of software development
For large meta information of national integrated statistics Integrated statistics, synthesized from many survey statistics, form an important part of government statistics. A typical example is the System of National Accounts. To develop such a system, it is necessary to make consistent preparation of 1) documents of methods, 2) programs, and 3) a database. However, this is usually not easy because of the large number of data types connected with the system. In this paper, we formulate a language as a means of supporting the design of statistical data integration. This language is based on the data abstraction model and treats four types of semantic hierarchies: generalization, derivation, association (aggregation) and classification. We demonstrate that this language leads to natural documentation of statistical data integration, and that meta information, used in both programs and a database for the integration, can be generated from the documents.
SA-ER: A Methodology that Links Structured Analysis and Entity-Relationship Modeling for Database Design
Automating the Transformational Development of Software This paper reports on efforts to extend the transformational implementation (TI) model of software development [1]. In particular, we describe a system that uses AI techniques to automate major portions of a transformational implementation. The work has focused on the formalization of the goals, strategies, selection rationale, and finally the transformations used by expert human developers. A system has been constructed that includes representations for each of these problem-solving components, as well as machinery for handling human-system interaction and problem-solving control. We will present the system and illustrate automation issues through two annotated examples.
A Modeling Foundation for a Second Generation System Engineering Tool
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
A correctness proof of a topology information maintenance protocol for a distributed computer network In order for the nodes of a distributed computer network to communicate, each node must have information about the network's topology. Since nodes and links sometimes crash, a scheme is needed to update this information. One of the major constraints on such a topology information scheme is that it may not involve a central controller. The Topology Information Protocol that was implemented on the MERIT Computer Network is presented and explained; this protocol is quite general and could be implemented on any computer network. It is based on Baran's “Hot Potato Heuristic Routing Doctrine.” A correctness proof of this Topology Information Protocol is also presented.
Using Abstraction and Model Checking to Detect Safety Violations in Requirements Specifications Exposing inconsistencies can uncover many defects in software specifications. One approach to exposing inconsistencies analyzes two redundant specifications, one operational and the other property-based, and reports discrepancies. This paper describes a "practical" formal method, based on this approach and the SCR (Software Cost Reduction) tabular notation, that can expose inconsistencies in software requirements specifications. Because users of the method do not need advanced mathematical training or theorem proving skills, most software developers should be able to apply the method without extraordinary effort. This paper also describes an application of the method which exposed a safety violation in the contractor-produced software requirements specification of a sizable, safety-critical control system. Because the enormous state space of specifications of practical software usually renders direct analysis impractical, a common approach is to apply abstraction to the specification. To reduce the state space of the control system specification, two "pushbutton" abstraction methods were applied, one which automatically removes irrelevant variables and a second which replaces the large, possibly infinite, type sets of certain variables with smaller type sets. Analyzing the reduced specification with the model checker Spin uncovered a possible safety violation. Simulation demonstrated that the safety violation was not spurious but an actual defect in the original specification.
Lossless compression of multispectral image data While spatial correlations are adequately exploited by standard lossless image compression techniques, little success has been attained in exploiting spectral correlations when dealing with multispectral image data. The authors present some new lossless image compression techniques that capture spectral correlations as well as spatial correlation in a simple and elegant manner. The schemes are based on the notion of a prediction tree, which defines a noncausal prediction model for an image. The authors present a backward adaptive technique and a forward adaptive technique. They then give a computationally efficient way of approximating the backward adaptive technique. The approximation gives good results and is extremely easy to compute. Simulation results show that for high spectral resolution images, significant savings can be made by using spectral correlations in addition to spatial correlations. Furthermore, the increase in complexity incurred in order to make these gains is minimal
On Teaching Visual Formalisms A graduate course on visual formalisms for reactive systems emphasized using such languages for not only specification and requirements but also (and predominantly) actual execution. The course presented two programming approaches: an intra-object approach using statecharts and an interobject approach using live sequence charts. Using each approach, students built a small system of their choice and then combined the two systems.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2496
0.0624
0.0096
0.002054
0.000185
0.000062
0.000001
0
0
0
0
0
0
0
Infinite-Alphabet Prefix Codes Optimal for beta-Exponential Penalties Let P = {p(i)} be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial P for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, β-exponential means, those of the form $\log_a \sum_i p(i)\, a^{n(i)}$, where n(i) is the length of the ith codeword and a is a positive constant. Applications of such minimizations include a problem of maximizing the chance of message receipt in single-shot communications (a < 1) and a problem of minimizing the chance of buffer overflow in a queueing system (a > 1). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions. Both are extended to minimizing maximum pointwise redundancy.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Refinement Theory that Supports Reasoning About Knowledge and Time An expressive semantic framework for program refinement that supports both temporal reasoning and reasoning about the knowledge of multiple agents is developed. The refinement calculus owes the cleanliness of its decomposition rules for all programming language constructs and the relative simplicity of its semantic model to a rigid synchrony assumption which requires all agents and the environment to proceed in lockstep. The new features of the calculus are illustrated in a derivation of the two-phase-commit protocol.
Automating refinement checking in probabilistic system design Refinement plays a crucial role in "top-down" styles of verification, such as the refinement calculus, but for probabilistic systems proof of refinement is a particularly challenging task due to the combination of probability and nondeterminism which typically arises in partially-specified systems. Whilst the theory of probabilistic refinement is well-known [18] there are few tools to help with establishing refinements between programs. In this paper we describe a tool which provides partial support during refinement proofs. The tool essentially builds small models of programs using an algebraic rewriting system to extract the overall probabilistic behaviour. We use that behaviour to recast refinement-checking as a linear satisfiability problem, which can then be exported to a linear arithmetic solver. One of the major benefits of this approach is the ability to generate counter examples, alerting the prover to a problem in a proposed refinement. We demonstrate the technique on a small case study based on Schneider et al.'s Tank Monitoring [26].
Demonic, angelic and unbounded probabilistic choices in sequential programs Probabilistic predicate transformers extend standard predicate transformers by adding probabilistic choice to (transformers for) sequential programs; demonic nondeterminism is retained. For finite state spaces, the basic theory is set out elsewhere [17], together with a presentation of the probabilistic 'healthiness conditions' that generalise the 'positive conjunctivity' of ordinary predicate transformers. Here we expand the earlier results beyond ordinary conjunctive transformers, investigating the structure of the transformer space more generally: as Back and von Wright [1] did for the standard (non-probabilistic) case, we nest deterministic, demonic and demonic/angelic transformers, showing how each subspace can be constructed from the one before. We show also that the results hold for infinite state spaces. In the end we thus find characteristic healthiness conditions for the hierarchies of a system in which deterministic, demonic, probabilistic and angelic choices all coexist.
Qualitative probabilistic modelling in event-B Event-B is a notation and method for discrete systems modelling by refinement. We introduce a small but very useful construction: qualitative probabilistic choice. It extends the expressiveness of Event-B allowing us to prove properties of systems that could not be formalised in Event-B before. We demonstrate this by means of a small example, part of a larger Event-B development that could not be fully proved before. An important feature of the introduced construction is that it does not complicate the existing Event-B notation or method, and can be explained without referring to the underlying more complicated probabilistic theory. The necessary theory [18] itself is briefly outlined in this article to justify the soundness of the proof obligations given. We also give a short account of alternative constructions that we explored, and rejected.
The Generalised Substitution Language Extended to Probabilistic Programs Let predicate P be converted from Boolean to numeric type by writing ⟨P⟩, with ⟨false⟩ being 0 and ⟨true⟩ being 1, so that in a degenerate sense ⟨P⟩ can be regarded as 'the probability that P holds in the current state'. Then add explicit numbers and arithmetic operators, to give a richer language of arithmetic formulae into which predicates are embedded by ⟨·⟩. Abrial's generalised substitution language GSL can be applied to arithmetic rather than Boolean formulae with little extra effort. If we add a new operator p⊕ for probabilistic choice, it then becomes 'pGSL': a smooth extension of GSL that includes random algorithms within its scope.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Script: a communication abstraction mechanism and its verification In this paper, we introduce a new abstraction mechanism, called a script, which hides the low-level details that implement patterns of communication. A script localizes the communication between a set of roles (formal processes), to which actual processes enroll to participate in the action of the script. The paper discusses the addition of scripts to the languages CSP and ADA, and to a shared-variable language with monitors. Proof rules are presented for proving partial correctness and freedom from deadlock in concurrent programs using scripts.
Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automaton which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
Further Improvement of Free-Weighting Matrices Technique for Systems With Time-Varying Delay A novel method is proposed in this note for stability analysis of systems with a time-varying delay. Appropriate Lyapunov functional and augmented Lyapunov functional are introduced to establish some improved delay-dependent stability criteria. Less conservative results are obtained by considering the additional useful terms (which are ignored in previous methods) when estimating the upper bound of the derivative of Lyapunov functionals and introducing the new free-weighting matrices. The resulting criteria are extended to the stability analysis for uncertain systems with time-varying structured uncertainties and polytopic-type uncertainties. Numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method
Workflow Modeling A discussion of workflow models and process description languages is presented. The relationship between data, function and coordination aspects of the process is discussed, and a claim is made that more than one model view (or representation) is needed in order to grasp the complexity of process modeling. The basis of a new model is proposed, showing that more expressive models can be built by supporting asynchronous events and batch activities, matched by powerful run-time support.
Better knowledge management through knowledge engineering In recent years the term knowledge management has been used to describe the efforts of organizations to capture, store, and deploy knowledge. Most current knowledge management activities rely on database and Web technology; currently, few organizations have a systematic process for capturing knowledge, as distinct from data. The authors present a case study where knowledge engineering practices support knowledge management by a drilling optimization group in a large service company. The case study illustrates three facets of the knowledge management task: First, knowledge is captured by a knowledge acquisition process that uses a conceptual model of aspects of the company's business domain to guide the capture of cases. Second, knowledge is stored using a knowledge representation language to codify the structured knowledge in a number of knowledge bases, which together constitute a knowledge repository. Third, knowledge is deployed by running the knowledge bases in a knowledge server, accessible on the company intranet.
Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described.
Matching conceptual graphs as an aid to requirements re-use The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, ReqColl (Requirements Collector) is introduced. The tool uses conceptual graphs to represent domain concepts and attempts to recognise new concepts through the use of a matching facility. The overall approach to requirements capture is first described and the approach to matching illustrated informally. The detailed procedure for matching conceptual graphs is then given. Finally ReqColl is compared to similar work elsewhere and some future research directions indicated.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.1
0.08
0.066667
0
0
0
0
0
0
0
0
0
Energy-Aware Middleware Adding self-healing capabilities to network management systems holds great promise for delivering important goals, such as QoS, while simultaneously lowering capital expenditure, operation cost, and maintenance cost. In this paper, we present a model-based ...
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reusing semi-specified behavior models in systems analysis and design As the structural and behavioral complexity of systems has increased, so has interest in reusing modules in early development phases. Developing reusable modules and then weaving them into specific systems has been addressed by many approaches, including plug-and-play software component technologies, aspect-oriented techniques, design patterns, superimposition, and product line techniques. Most of these ideas are expressed in an object-oriented framework, so they reuse behaviors after dividing them into methods that are owned by classes. In this paper, we present a crosscutting reuse approach that applies Object-Process Methodology (OPM). OPM, which unifies system structure and behavior in a single view, supports the notion of a process class that does not belong to and is not encapsulated in an object class, but rather stands alone, capable of getting input objects and producing output objects. The approach features the ability to specify modules generically and concretize them in the target application. This is done in a three-step process: designing generic and target modules, weaving them into the system under development, and refining the combined specification in a way that enables the individual modules to be modified after their reuse. Rules for specifying and combining modules are defined and exemplified, showing the flexibility and benefits of this approach. Index Terms: software reuse, aspect-oriented software engineering, aspect-oriented modeling, Object-Process Methodology, modularity.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions on both functional and temporal properties and, furthermore, power related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Method for Visualizing Complicated Structures Based on Unified Simplification Strategy In this paper, we present a novel force-directed method for automatically drawing intersecting compound mixed graphs (ICMGs) that can express complicated relations among elements such as adjacency, inclusion, and intersection. For this purpose, we take a strategy called unified simplification that can transform layout problem for an ICMG into that for an undirected graph. This method is useful for various information visualizations. We describe definitions, aesthetics, force model, algorithm, evaluation, and applications.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Secrecy Outage Performance of a Cooperative Cognitive Relay Network. In this letter, we analyze the physical layer secrecy performance of a two-hop cooperative cognitive underlay relay network with a decode and forward relay and a passive eavesdropper. Unlike other works to date, we assume combining of direct and relayed signals at the destination and the eavesdropper. A closed-form expression is derived for secrecy outage probability. We show that ignoring the dir...
Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also the destination receives multiple versions of the message from the source, and one or more relays and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented
On the Performance of Cognitive Underlay Multihop Networks with Imperfect Channel State Information. This paper proposes and analyzes cognitive multihop decode-and-forward networks in the presence of interference due to channel estimation errors. To reduce interference on the primary network, a simple yet effective back-off control power method is applied for secondary multihop networks. For a given threshold of interference probability at the primary network, we derive the maximum back-off control power coefficient, which provides the best performance for secondary multihop networks. Moreover, it is shown that the number of hops for secondary network is upper-bounded under the fixed settings of the primary network. For secondary multihop networks, new exact and asymptotic expressions for outage probability (OP), bit error rate (BER) and ergodic capacity over Rayleigh fading channels are derived. Based on the asymptotic OP and BEP, a pivotal conclusion is reached that the secondary multihop network offers the same diversity order as compared with the network without back off. Finally, we verify the performance analysis through various numerical examples which confirm the correctness of our analysis for many channel and system settings and provide new insight into the design and optimization of cognitive multihop networks.
Robust Secure Beamforming in MISO Full-Duplex Two-Way Secure Communications Considering worst-case channel uncertainties, we investigate the robust secure beamforming design problem in multiple-input-single-output full-duplex two-way secure communications. Our objective is to maximize worst-case sum secrecy rate under weak secrecy conditions and individual transmit power constraints. Since the objective function of the optimization problem includes both convex and concave terms, we propose to transform convex terms into linear terms. We decouple the problem into four optimization problems and employ alternating optimization algorithm to obtain the locally optimal solution. Simulation results demonstrate that our proposed robust secure beamforming scheme outperforms the non-robust one. It is also found that when the regions of channel uncertainties and the individual transmit power constraints are sufficiently large, because of self-interference, the proposed two-way robust secure communication is proactively degraded to one-way communication.
Secure Relaying in Multihop Communication Systems. This letter considers improving end-to-end secrecy capacity of a multihop decode-and-forward relaying system. First, a secrecy rate maximization problem without transmitting artificial noise (AN) is considered, following which the AN-aided secrecy schemes are proposed. Assuming that global channel state information (CSI) is available, an optimal power splitting solution is proposed. Furthermore, an iterative joint optimization of transmit power and power splitting coefficient has also been considered. For the scenario of no eavesdropper's CSI, we provide a suboptimal solution. The simulation results demonstrate that the AN-aided optimal scheme outperforms other schemes.
Artificial Noise-Aided Physical Layer Security in Underlay Cognitive Massive MIMO Systems with Pilot Contamination. In this paper, a secure communication model for cognitive multi-user massive multiple-input multiple-output (MIMO) systems with underlay spectrum sharing is investigated. A secondary (cognitive) multi-user massive MIMO system is operated by using underlay spectrum sharing within a primary (licensed) multi-user massive MIMO system. A passive multi-antenna eavesdropper is assumed to be eavesdropping upon either the primary or secondary confidential transmissions. To this end, a physical layer security strategy is provisioned for the primary and secondary transmissions via artificial noise (AN) generation at the primary base-station (PBS) and zero-forcing precoders. Specifically, the precoders are constructed by using the channel estimates with pilot contamination. In order to degrade the interception of confidential transmissions at the eavesdropper, the AN sequences are transmitted at the PBS by exploiting the excess degrees-of-freedom offered by its massive antenna array and by using random AN shaping matrices. The channel estimates at the PBS and secondary base-station (SBS) are obtained by using non-orthogonal pilot sequences transmitted by the primary user nodes (PUs) and secondary user nodes (SUs), respectively. Hence, these channel estimates are affected by intra-cell pilot contamination. In this context, the detrimental effects of intra-cell pilot contamination and channel estimation errors for physical layer secure communication are investigated. For this system set-up, the average and asymptotic achievable secrecy rate expressions are derived in closed-form. Specifically, these performance metrics are studied for imperfect channel state information (CSI) and for perfect CSI, and thereby, the secrecy rate degradation due to inaccurate channel knowledge and intra-cell pilot contamination is quantified. Our analysis reveals that a physical layer secure communication can be provisioned for both primary and secondary massive MIMO systems even with the channel estimation errors and pilot contamination.
A New Look at Dual-Hop Relaying: Performance Limits with Hardware Impairments. Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CLP(R) is used to illustrate verification techniques.
Software process modeling: principles of entity process models
Animation of Object-Z Specifications with a Set-Oriented Prototyping Language
3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications.
One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.
New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method.
1.24
0.24
0.24
0.24
0.24
0.24
0.08
0
0
0
0
0
0
0
A Requirement-Driven Architecture Definition Approach For Conceptual Design Of Mechatronic Systems Designers specify design requirements and determine the interconnections between the design requirements and main components of the system architecture during the conceptual design phase. To define the architecture of a mechatronic system, two research issues, namely system decomposition and component selection, should receive special attention. However, the manner in which to realise system decomposition and component selection by using the requirement specification results, to achieve an appropriate system architecture, has seldom been investigated in existing studies. Therefore, the authors present a requirement-driven architecture definition approach to solve the system decomposition and component selection problems. A well-formulated specification of design requirements is proposed, which classifies design requirements into functional and nonfunctional requirements. Moreover, a decomposition method based on the functional requirements is presented to help designers realise system decomposition, while a component selection method based on the non-functional requirements is proposed to help designers select the most suitable components. The lunar roving vehicle and automated ceramic matrix composite materials cutting system are selected as two case studies to demonstrate the application of the proposed approach in both the academic and industrial domains.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions on both functional and temporal properties and, furthermore, power related issues are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Creating a Design Science of Human-Computer Interaction An increasingly important task of computer science is to support the analysis and design of computers as things to learn from, as tools to use in one's work, as media for interacting with other people. Human-computer interaction (HCI) is the speciality area that addresses this task. Through the past two decades, HCI has pursued a broad and ambitious scientific agenda, progressively integrating its research concerns with the contexts of system development and use. This has created an unprecedented opportunity to manage the emergence of new technology so as to support socially responsive objectives.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions on both functional and temporal properties and, furthermore, power related issues are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Visual feedback for validation of informal specifications In automatically synthesizing simulation models from informal specifications, the ambiguity of natural language (English) leads to multiple interpretations. The authors report on a system, called the Model Generator, which provides visual feedback showing the interpretation of specification statements that have been automatically translated to a knowledge representation called conceptual graphs. The visual feedback is based on a combination of block diagrams and Petri net graphs.
Automated assists to the behavioral modeling process The coding of behavioral models is a time consuming and error prone process. In this paper the authors describe automated assists to the behavioral modeling process which reduce the coding time and result in models which have a well-defined structure, making it easier to ensure their accuracy. The approach uses a particular graphical representation for the model. An interactive tool then assists in converting the graphical representation to the behavioral HDL code. The authors discuss a pictorial representation for VHDL behavioral models. In VHDL an architectural body is used to define the behavior of a device. These architectural bodies are a set of concurrently running processes. These processes are either process blocks or various forms of the signal assignment statements. One can give a pictorial representation to a behavioral architectural body by means of a process model graph (PMG).
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Mapping design knowledge from multiple representations The requirements and specifications documents which initiate and control design and development projects typically use a variety of formal and informal notational systems. The goal of the research reported is to automatically interpret requirement documents expressed in a variety of notations and to integrate the interpretations in order to support requirements analysis and synthesis from them. Because the source notations include natural language, a form of semantic net called conceptual graphs is adopted as the intermediate knowledge representation for expressing interpretations and integrating them. The focus is to describe the interpretation or mapping of a few requirements notations to conceptual graphs, and to indicate the process of joining these interpretations
Toward synthesis from English descriptions This paper reports on a research project to design a system for automatically interpreting English specifications of digital systems in terms of design representation formalisms currently employed in CAD systems. The necessary processes involve the machine analysis of English and the synthesis of models from the specifications. The approach being investigated is interactive and consists of syntactic scanning, semantic analysis, interpretation generation, and model integration.
Visualization of structural information: automatic drawing of compound digraphs An automatic method for drawing compound digraphs that contain both inclusion edges and adjacency edges is presented. In the method vertices are drawn as rectangles (areas for texts, images, etc.), inclusion edges by the geometric inclusion among the rectangles, and adjacency edges by arrows connecting them. Readability elements such as drawing conventions and rules are identified, and a heuristic algorithm to generate readable diagrams is developed. Several applications are shown to demonstrate the effectiveness of the algorithm. The utilization of curves to improve the quality of diagrams is investigated. A possible set of command primitives for progressively organizing structures within this graph formalism is discussed. The computational time for the applications shows that the algorithm achieves satisfactory performance.
Program Construction by Parts. Given a specification that includes a number of user requirements, we wish to focus on the requirements in turn, and derive a partly defined program for each; then combine all the partly defined programs into a single program that satisfies all the requirements simultaneously. In this paper we introduce a mathematical basis for solving this problem, and we illustrate it by means of a simple example. 1 Introduction and Motivation We propose a program construction method whereby, given a...
Synergy as a Hybrid Object-Oriented Conceptual Graph Language This paper presents the use of Synergy as a Hybrid Object-Oriented Conceptual Graph Language (HOO-CGL). Synergy is an implemented visual multi-paradigm language based on executable conceptual graphs with an activation interpretation, instead of a logical one. This paper describes the formulation in Synergy of basic concepts of the hybrid object-oriented paradigm: encapsulation, definition of a class with methods and daemons, method and daemon definitions, class hierarchy, instance and instantiation mechanism, inheritance (both property and method inheritance), method call, method execution and daemon invocation due to accessing data. An example is used to illustrate the presentation of such a Hybrid Object-Oriented Conceptual Graph Language.
Knowledge management technologies and applications—literature review from 1995 to 2002 This paper surveys knowledge management (KM) development using a literature review and classification of articles from 1995 to 2002 with a keyword index in order to explore how KM technologies and applications have developed in this period. Based on 234 articles on knowledge management applications, this paper surveys and classifies KM technologies using seven categories: KM framework, knowledge-based systems, data mining, information and communication technology, artificial intelligence/expert systems, database technology, and modeling, together with their applications for different research and problem domains. Some discussion is presented, indicating future development for knowledge management technologies and applications as follows: (1) KM technologies tend to develop towards expert orientation, and KM applications development is a problem-oriented domain. (2) Different social studies methodologies, such as statistical methods, are suggested for implementation in KM as another kind of technology. (3) Integration of qualitative and quantitative methods, and integration of KM technologies studies, may broaden our horizon on this subject. (4) The ability to continually change and obtain new understanding is the power of KM technologies and will be the application of future works.
Systems analysis: a systemic analysis of a conceptual model Adopting an appropriate model for systems analysis, by avoiding a narrow focus on the requirements specification and increasing the use of the systems analyst's knowledge base, may lead to better software development and improved system life-cycle management.
A Tool For Task-Based Knowledge And Specification Acquisition Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and the maintenance of expert systems through their life cycles. (C) 1994 John Wiley & Sons, Inc.
Business Process Modeling Process modeling and workflow applications have become more and more important during the last decade. The main reason for this increased interest is the need to provide computer-aided system integration of the enterprise based on its business processes. This need requires a technology that makes it possible to integrate modeling, simulation and enactment of processes into one single package. The primary focus of all tools is to describe the way activities are ordered in time. This kind of partially ordered steps shows how the output of one activity can serve as the input to another one. But there is also another aspect of the business process that has to be involved: where the activities are executed. The spatial aspect of the process enactment represents a new dimension in the process engineering discipline. It is also important to understand that not just process enactment but also the early phases of process specification have to cope with this spatial aspect. The paper is going to discuss how all these above-mentioned principles can be integrated together and how the standard approach in process specification might be extended with the spatial dimension to make business process models more natural and understandable.
FIRST: Fractal Indexing and Retrieval SysTem for Image Databases We present an image indexing method and a system to perform content-based retrieval in heterogeneous image databases (IDB). The method is based upon the fractal framework of the iterated function systems (IFS) widely used for image compression. The image index is represented through a vector of numeric features, corresponding to contractive functions (CF) of the IFS framework. The construction of the index vector requires a preliminary processing of the images to select an appropriate set of indexing features (i.e. contractive functions). The latter will be successively used to fill in the vector components, computed as frequencies by which the selected contractive functions appear inside the images. In order to manipulate the index vectors efficiently we use discrete Fourier transform (DFT) to reduce their cardinalities and use a spatial access method (SAM), like R*-tree, to improve search performances. The sound theoretical framework underlying the method enabled us to formally prove some properties of the index. However, for a complete validation of the indexing method, also in terms of effectiveness and efficacy, we performed several experiments on a large collection of images from different domains, which revealed good system performances with a low percentage of false alarms and false dismissals.
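As a rough illustration of the indexing step described above, the sketch below turns a stream of contractive-function (CF) labels into a frequency vector and keeps only a few DFT magnitudes as a short key suitable for a spatial access method. The CF labels, image sizes and the number of retained coefficients are made-up assumptions; the IFS fitting itself is not shown.

```python
# Sketch of the index construction: turn a list of contractive-function (CF)
# labels observed in an image into a frequency vector, then keep only the first
# few DFT coefficients as a short key for a spatial access method.  The IFS
# fitting that produces the CF labels is assumed to have been done already.
import numpy as np

def index_vector(cf_labels, num_features, keep=4):
    freq = np.bincount(cf_labels, minlength=num_features) / len(cf_labels)
    spectrum = np.fft.rfft(freq)              # energy concentrates in low terms
    return np.abs(spectrum[:keep])            # short key, e.g. for an R*-tree

# Two images described by made-up CF label streams over 16 candidate CFs.
rng = np.random.default_rng(2)
img_a = rng.integers(0, 16, size=300)
img_b = rng.integers(0, 16, size=300)
key_a, key_b = index_vector(img_a, 16), index_vector(img_b, 16)
print(key_a, np.linalg.norm(key_a - key_b))   # distance used to prune the search
```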
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.067883
0.0677
0.0677
0.0677
0.022693
0.002667
0.000049
0.000009
0.000002
0
0
0
0
0
Knowledge base applications with software engineering: a tool for requirements specifications The development of proper system specifications provides a foundation for the entire software development process. The effectiveness of the overall system is influenced greatly by the quality of the specifications produced during this phase of the life cycle. Furthermore, this phase acts as a “bridge” between the user and the software development team. A tool is presented for improving the environment in which specifications are constructed. The bridge between the user and software team is enhanced, improving the interaction between these two groups. By enhancing the user's role within the specification processes, the overall quality of the requirements specifications documents can be improved.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Stixel on the Bus: An Efficient Lossless Compression Scheme for Depth Information in Traffic Scenarios The modern automotive industry has to meet the requirement of providing a safer, more comfortable and interactive driving experience. Depth information retrieved from a stereo vision system is one significant resource enabling vehicles to understand their environment. Relying on the stixel, a compact representation of depth information using thin planar rectangles, the problem of processing huge amounts of depth data in real-time can be solved. In this paper, we present an efficient lossless compression scheme for stixels, which further reduces the data volume by a factor of 3.3863. The predictor of the proposed approach is adapted from the LOCO-I (LOw COmplexity LOssless COmpression for Images) algorithm in the JPEG-LS standard. The compressed stixel data could be sent to the in-vehicle communication bus system for future vehicle applications such as autonomous driving and mixed reality systems.
Optimal source codes for geometrically distributed integer alphabets (Corresp.) Let $P(i) = (1-\theta)\theta^i$ be a probability assignment on the set of nonnegative integers, where $\theta$ is an arbitrary real number, $0 < \theta < 1$. We show that an optimal binary source code for this probability assignment is constructed as follows. Let $l$ be the integer satisfying $\theta^l + \theta^{l+1} \leq 1 < \theta^l + \theta^{l-1}$ and represent each nonnegative integer $i$ as $i = lj + r$ where $j = \lfloor i/l \rfloor$, the integer part of $i/l$, and $r = i \bmod l$. Encode $j$ by a unary code (i.e., $j$ zeros followed by a single one), and encode $r$ by a Huffman code, using codewords of length $\lfloor \log_2 l \rfloor$ for $r < 2^{\lfloor \log_2 l \rfloor + 1} - l$, and length $\lfloor \log_2 l \rfloor + 1$ otherwise. An optimal code for the nonnegative integers is the concatenation of those two codes.
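The construction above can be written down directly; the sketch below is a small Python rendering of it, with the quotient coded in unary and the remainder in the stated truncated-binary (Huffman) code. The helper choose_l simply searches for the smallest l satisfying the inequality.

```python
# Sketch of the unary/truncated-binary code described above (a Golomb code
# with parameter l); l would normally be chosen from theta via the stated
# inequality, here choose_l searches for it directly.
import math

def golomb_encode(i, l):
    j, r = divmod(i, l)                     # i = l*j + r
    unary = "0" * j + "1"                   # encode the quotient in unary
    k = math.floor(math.log2(l))
    threshold = 2 ** (k + 1) - l            # first `threshold` remainders get k bits
    if r < threshold:
        binary = format(r, "b").zfill(k) if k > 0 else ""
    else:
        binary = format(r + threshold, "b").zfill(k + 1)
    return unary + binary

def choose_l(theta):
    l = 1
    while not (theta ** l + theta ** (l + 1) <= 1.0):
        l += 1
    return l

l = choose_l(0.7)                            # -> 2 for theta = 0.7
print([golomb_encode(i, l) for i in range(6)])
```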
LOCO-I: a low complexity, context-based, lossless image compression algorithm LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus “enjoying the best of both worlds.” The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications
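A small sketch of the fixed predictor at the heart of LOCO-I (the median edge detector); the context modelling, bias correction and adaptive Golomb-Rice parameter selection are omitted here, and the border handling is an illustrative simplification.

```python
# Sketch of the LOCO-I / JPEG-LS fixed predictor (median edge detector):
# given the west (a), north (b) and north-west (c) neighbours, predict the
# current pixel; context modelling and bias correction are omitted here.
def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)          # edge above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c              # smooth region: planar prediction

def prediction_residuals(img):
    """img is a list of rows of equal length; returns the residual image."""
    res = []
    for y, row in enumerate(img):
        out = []
        for x, v in enumerate(row):
            a = row[x - 1] if x > 0 else 0
            b = img[y - 1][x] if y > 0 else 0
            c = img[y - 1][x - 1] if x > 0 and y > 0 else 0
            out.append(v - med_predict(a, b, c))
        res.append(out)
    return res

print(prediction_residuals([[10, 10, 12],
                            [10, 11, 13],
                            [40, 41, 43]]))
```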
Run-length encodings (Corresp.)
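For completeness, a minimal run-length coding sketch: each maximal run of identical symbols becomes a (symbol, count) pair.

```python
# Minimal run-length coding sketch: each maximal run of identical symbols is
# replaced by a (symbol, run-length) pair.
from itertools import groupby

def rle_encode(seq):
    return [(sym, len(list(run))) for sym, run in groupby(seq)]

def rle_decode(pairs):
    return [sym for sym, n in pairs for _ in range(n)]

data = list("aaab1111cc")
pairs = rle_encode(data)
assert rle_decode(pairs) == data
print(pairs)        # [('a', 3), ('b', 1), ('1', 4), ('c', 2)]
```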
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
Programmers use slices when debugging Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program.
List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications.
A new, fast, and efficient image codec based on set partitioning in hierarchical trees Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code
Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it.—Authors' Abstract
Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
A Software Development Environment for Improving Productivity
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.006061
0.004167
0.00303
0
0
0
0
0
0
0
0
0
0
A Secure Low Complexity Approach for Compression and Transmission of 3-D Medical Images Digital images play an important role in a wide range of medical applications. Several widespread technologies for digital imaging, such as Computed Tomography (CT), Magnetic Resonance (MR), etc., produce three-dimensional images. Data compression is thus essential to reduce the volume of such images, permitting their efficient storage along with the improvement of the relative transmission time over the Internet or other ad-hoc systems, like Picture Archiving and Communication Systems (PACS), tele-radiology, etc. Since these images are often stored in systems particularly vulnerable from the point of view of security, especially because they contain sensitive data, it is necessary to provide such images with a mechanism which ensures at least security against message forgery. In fact, an attack can be made by altering a medical image and, consequently, may alter the relative diagnosis. The purpose of this work is twofold: first we propose a low complexity approach for the compression of 3-D medical images; then, in order to limit the above defined potential attack, we propose an efficient method to insert within each image an invisible digital watermark during the compression process. In this way, we define a hybrid approach that handles simultaneously and efficiently both the compression and the security of three-dimensional images. We validate the proposed approach by showing test results.
Detection algorithms for hyperspectral imaging applications We introduce key concepts and issues including the effects of atmospheric propagation upon the data, spectral variability, mixed pixels, and the distinction between classification and detection algorithms. Detection algorithms for full pixel targets are developed using the likelihood ratio approach. Subpixel target detection, which is more challenging due to background interference, is pursued using both statistical and subspace models for the description of spectral variability. Finally, we provide some results which illustrate the performance of some detection algorithms using real hyperspectral imaging (HSI) data. Furthermore, we illustrate the potential deviation of HSI data from normality and point to some distributions that may serve in the development of algorithms with better or more robust performance. We therefore focus on detection algorithms that assume multivariate normal distribution models for HSI data
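One classical detector in the likelihood-ratio family discussed above is the matched filter for a full-pixel target under Gaussian background statistics; the sketch below uses made-up signatures, background samples and a made-up threshold purely for illustration.

```python
# Sketch of a matched-filter detector for a full-pixel target under Gaussian
# background statistics; signature, background samples and threshold are
# illustrative, not taken from the paper.
import numpy as np

def matched_filter_scores(pixels, background, target):
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    w = np.linalg.solve(cov, target - mu)            # unnormalised filter
    w = w / ((target - mu) @ w)                      # unit response on the target
    return (pixels - mu) @ w

rng = np.random.default_rng(0)
bands = 20
background = rng.normal(0.0, 1.0, size=(500, bands))
target = np.full(bands, 3.0)
pixels = np.vstack([rng.normal(0.0, 1.0, size=(5, bands)),            # clutter
                    target + rng.normal(0.0, 0.2, size=(3, bands))])  # targets
scores = matched_filter_scores(pixels, background, target)
print(scores > 0.5)       # detections above an illustrative threshold
```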
A novel reversible image data hiding scheme based on pixel value ordering and dynamic pixel block partition Recently, various efficient reversible data-hiding schemes based on pixel value ordering have been proposed for embedding messages into high-fidelity images. In these schemes, after dividing the cover image into equal-sized blocks, the pixels within a given block are ordered according to their values, and data embedding is achieved by modifying the maximum and minimum values of each block. For a given embedding capacity, the optimal block size is exhaustively searched so that the embedding distortion is minimized. These pixel value ordering-based schemes perform fairly well, especially for low embedding capacity. However, to obtain a larger embedding capacity, a smaller block size should be used, which usually leads to a dramatic quality degradation of the marked image. In this paper, to address this drawback and to enhance the performance of pixel value ordering-based embedding further, a novel reversible data hiding method is proposed. Instead of using equal-sized blocks, a dynamic blocking strategy is used to divide the cover image adaptively into various-sized blocks. Specifically, flat image areas are preferentially divided into smaller blocks to retain high embedding capacity, whereas rough areas are divided into larger blocks to avoid decreasing peak signal-to-noise ratio. As a result, the proposed scheme can provide a larger embedding capacity than current pixel value ordering-based schemes while keeping distortion low. The superiority of the proposed method is also experimentally verified.
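A minimal sketch of the pixel-value-ordering idea on the maximum of a single block (the minimum side is symmetric); the adaptive block partitioning that is the paper's contribution, as well as location maps and overflow handling, are left out.

```python
# Sketch of pixel-value-ordering (PVO) embedding on the maximum of a single
# block (the minimum side is symmetric); block partitioning, location maps and
# overflow handling are omitted.
def pvo_embed_max(block, bit):
    """block: list of pixel values; returns the marked block."""
    out = list(block)
    order = sorted(range(len(out)), key=lambda i: out[i])
    i_max, i_2nd = order[-1], order[-2]
    e = out[i_max] - out[i_2nd]          # prediction error, always >= 0
    if e == 1:
        out[i_max] += bit                # embed one bit
    elif e > 1:
        out[i_max] += 1                  # shift, no bit carried
    return out                           # e == 0: block left untouched

def pvo_extract_max(marked):
    out = list(marked)
    order = sorted(range(len(out)), key=lambda i: out[i])
    i_max, i_2nd = order[-1], order[-2]
    e = out[i_max] - out[i_2nd]
    bit = None
    if e in (1, 2):                      # a bit was embedded
        bit = e - 1
        out[i_max] -= bit
    elif e > 2:                          # block was only shifted
        out[i_max] -= 1
    return out, bit

block = [57, 58, 60, 59]
marked = pvo_embed_max(block, 1)
restored, bit = pvo_extract_max(marked)
assert restored == block and bit == 1
print(marked, bit)
```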
Visualization, Band Ordering And Compression Of Hyperspectral Images Air-borne and space-borne acquired hyperspectral images are used to recognize objects and to classify materials on the surface of the earth. The state of the art compressor for lossless compression of hyperspectral images is the Spectral oriented Least SQuares (SLSQ) compressor (see [1-7]). In this paper we discuss hyperspectral image compression: we show how to visualize each band of a hyperspectral image and how this visualization suggests that an appropriate band ordering can lead to improvements in the compression process. In particular, we consider two important distance measures for band ordering: Pearson's Correlation and Bhattacharyya distance, and report on experimental results achieved by a Java-based implementation of SLSQ.
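As a toy illustration of correlation-driven band ordering, the sketch below greedily chains bands so that each newly placed band is the one most correlated (by absolute Pearson r) with the previously placed band; the synthetic cube and the greedy strategy are illustrative assumptions, not the paper's exact ordering procedure.

```python
# Sketch of correlation-driven band ordering: greedily chain bands so that each
# band is predicted from its most correlated, already-placed neighbour.
import numpy as np

def order_bands_by_correlation(cube):
    """cube: array of shape (bands, height, width)."""
    flat = cube.reshape(cube.shape[0], -1).astype(float)
    corr = np.abs(np.corrcoef(flat))               # band-to-band |Pearson r|
    np.fill_diagonal(corr, -1.0)
    order = [0]                                    # start from band 0
    remaining = set(range(1, cube.shape[0]))
    while remaining:
        nxt = max(remaining, key=lambda b: corr[order[-1], b])
        order.append(nxt)
        remaining.remove(nxt)
    return order

rng = np.random.default_rng(1)
base = rng.normal(size=(8, 8))
cube = np.stack([base * (1 + 0.1 * k) + rng.normal(scale=0.05, size=(8, 8))
                 for k in (0, 3, 1, 2)])           # four correlated synthetic bands
print(order_bands_by_correlation(cube))
```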
Data compression using adaptive coding and partial string matching The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
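A sketch of the prediction side only: an order-2 adaptive context model with a PPM-style escape (method A, no exclusions) that reports the code length an arithmetic coder would achieve with these probabilities; the coder itself and the exclusion mechanism are omitted.

```python
# Sketch of PPM-style adaptive prediction (escape method A, no exclusions):
# the probability assigned to each symbol is what would be fed to an
# arithmetic coder; the coder and exclusions are left out for brevity.
import math
from collections import defaultdict

def ppm_code_length(text, max_order=2, alphabet_size=256):
    counts = defaultdict(lambda: defaultdict(int))   # context -> symbol -> count
    total_bits = 0.0
    for i, sym in enumerate(text):
        p = 1.0
        for order in range(min(max_order, i), -1, -1):
            ctx = text[i - order:i]
            seen = counts[ctx]
            n = sum(seen.values())
            if seen.get(sym):
                p *= seen[sym] / (n + 1)             # found in this context
                break
            p *= 1 / (n + 1)                         # emit an escape, drop an order
        else:
            p *= 1 / alphabet_size                   # order -1: uniform fallback
        total_bits += -math.log2(p)
        for order in range(min(max_order, i) + 1):   # update all context orders
            counts[text[i - order:i]][sym] += 1
    return total_bits

msg = "the theme of the thesis" * 4
print(f"{ppm_code_length(msg) / len(msg):.2f} bits/char")
```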
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they wonder if such a controller may prevent the explosion of the number of tokens
Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automation which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed.
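The core disjointness check can be illustrated with a small sketch: guards of transitions triggered by the same event are modelled as predicates and checked for overlap by enumerating a finite state space. The states and guards below are made up and stand in for the SCR tabular conditions.

```python
# Sketch of one consistency check: two transitions triggered by the same event
# must have mutually exclusive (disjoint) guarding conditions.  Guards are
# modelled as predicates over a small, exhaustively enumerated state space.
from itertools import product

def find_guard_overlaps(guards, state_space):
    """guards: dict name -> predicate(state); returns conflicting pairs."""
    conflicts = []
    names = list(guards)
    for state in state_space:
        active = [n for n in names if guards[n](state)]
        for i in range(len(active)):
            for j in range(i + 1, len(active)):
                conflicts.append((active[i], active[j], state))
    return conflicts

# Monitored quantities: a flight mode and a boolean 'above threshold' flag.
states = list(product(["cruise", "climb", "descend"], [True, False]))
guards = {
    "t1": lambda s: s[0] == "cruise" and s[1],
    "t2": lambda s: s[1],                      # overlaps with t1 when cruising
    "t3": lambda s: s[0] == "descend" and not s[1],
}
for a, b, state in find_guard_overlaps(guards, states):
    print(f"nondeterminism: {a} and {b} both enabled in {state}")
```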
Fuzzy identification of systems and its application to modeling and control
Database design with common sense business reasoning and learning Automated database design systems embody knowledge about the database design process. However, their lack of knowledge about the domains for which databases are being developed significantly limits their usefulness. A methodology for acquiring and using general world knowledge about business for database design has been developed and implemented in a system called the Common Sense Business Reasoner, which acquires facts about application domains and organizes them into a hierarchical, context-dependent knowledge base. This knowledge is used to make intelligent suggestions to a user about the entities, attributes, and relationships to include in a database design. A distance function approach is employed for integrating specific facts, obtained from individual design sessions, into the knowledge base (learning) and for applying the knowledge to subsequent design problems (reasoning).
Executable requirements for embedded systems An approach to requirements specification for embedded systems, based on constructing an executable model of the proposed system interacting with its environment, is proposed. The approach is explained, motivated, and related to data-oriented specification techniques. Portions of a specification language embodying it are introduced, and illustrated with an extended example in which the requirements for a process-control system are developed incrementally.
The Jikes research virtual machine project: building an open-source research community This paper describes the evolution of the Jikes™ Research Virtual Machine project from an IBM internal research project, called Jalapeño, into an open-source project. After summarizing the original goals of the project, we discuss the motivation for releasing it as an open-source project and the activities performed to ensure the success of the project. Throughout, we highlight the unique challenges of developing and maintaining an open-source project designed specifically to support a research community.
Verifying task-based specifications in conceptual graphs A conceptual model is a model of real world concepts and application domains as perceived by users and developers. It helps developers investigate and represent the semantics of the problem domain, as well as communicate among themselves and with users. In this paper, we propose the use of task-based specifications in conceptual graphs (TBCG) to construct and verify a conceptual model. Task-based specification methodology is used to serve as the mechanism to structure the knowledge captured in the conceptual model; whereas conceptual graphs are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of verification. Verifying a conceptual model is performed on model specifications of a task through constraints satisfaction and relaxation techniques, and on process specifications of the task based on operators and rules of inference inherited in conceptual graphs.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.1
0.05
0.05
0.014286
0.005263
0
0
0
0
0
0
0
0
0
A Tool For Task-Based Knowledge And Specification Acquisition Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and the maintenance of expert systems through their life cycles. (C) 1994 John Wiley & Sons, Inc.
The role of knowledge in software development Software development is knowledge-intensive. Many concepts have been developed to ease or guide the processing of knowledge in software development, including information hiding, modularity, objects, functions and procedures, patterns, and more. These concepts are supported by various methods, approaches, and tools using symbols, graphics, and languages. Some are formal; others are semiformal or simply made up of key practices. Methods and approaches in software engineering are often based on the results of empirical observations or on individual success stories.
Organizing usability work to fit the full product range
Knowledge Representation And Reasoning In Software Engineering It has been widely recognized that in order to solve difficult problems using computers one will usually have to use a great deal of knowledge (often domain specific), rather than a few general principles. The intent of this special issue was to study how this attitude has affected research on tools for improved software productivity and quality. Many such tools and problems related to them were discussed at a Workshop on the Development of Intelligent and Cooperative Information Systems, held in Niagara-on-the-Lake in April 1991, from which the idea for this issue originated.
Knowledge-based and statistical approaches to text retrieval Major research issues in information retrieval are reviewed, and developments in knowledge-based approaches are described. It is argued that although a fair amount of work has been done, the effectiveness of this approach has yet to be demonstrated. It is suggested that statistical techniques and knowledge-based approaches should be viewed as complementary, rather than competitive.
A mapping system from Object-Z to C++ Object-Z is an extension of the formal specification language Z, augmenting the class concept as a structuring facility. The paper introduces and discusses a structural mapping system from Object-Z to the programming language C++, and reports on its implementation on Unix. The structural mapping translates an Object-Z specification consisting of classes into class interfaces of C++ such as data members and prototypes of member functions. Thus it is not intended as a code generation system, but rather as a tool for analyzing specification (including syntax and type checking) and for aiding a software developer in obtaining code. Through the implementation of the mapping system several language features of Object-Z and C++ concerning object-orientation are clarified
Task-Based Specifications Through Conceptual Graphs Combining conceptual graphs with the task-based specification method to specify software requirements helps capture richer semantics, and integrates requirements specifications tightly and uniformly. Conceptual modeling is an important step toward the construction of user requirements. Requirements engineering is knowledge-intensive and cannot be dealt with using only a few general principles. Therefore, a conceptual model is domain-oriented and should represent the richer semantics of the problem domain. The conceptual model also helps designers communicate among themselves and with users. To capture and represent a conceptual model for the problem domain, we need mechanisms to structure the knowledge of the problem domain at the conceptual level, which has the underlying principles of abstraction and encapsulation; and formalisms to represent the semantics of the problem domain and to provide a reasoning capability for verification and validation. We propose the task-based specification methodology as the mechanism to structure the knowledge captured in conceptual models. TBSM offers four main benefits for constructing conceptual models: First, incorporating the task structure provides a detailed functional-decomposition technique for organizing and refining functional and behavioral specifications. Second, the distinction between soft and rigid conditions lets us specify conflicting functional requirements. Third, with TBSM, not only can we document the expected control flow and module interactions, but we can also verify that the behavioral specification is consistent with the system's functional specification. Fourth, the state model makes it easier to describe complex state conditions. Terminology defined in the state model can easily be reused for specifying the functionality of different tasks. Without such a state model, describing the state conditions before and after a functional unit of an expert system is too cumbersome to be practical. We propose conceptual graphs as the formalism to express task-based specifications where the task structure of problem-solving knowledge drives the specification, the pieces of the specification can be iteratively refined, and verification can be performed for a single layer or between layers. We chose conceptual graphs for their expressive power to represent both declarative and procedural knowledge, and for their assimilation capability, that is, their ability to be combined.
Are knowledge representations the answer to requirement analysis? A clear distinction between a requirement and a specification is crucial to an understanding of how and why knowledge representation techniques can be useful for the requirement stage. A useful distinction is to divide the requirement analysis phase into a problem specification and system specification phases. It is argued that it is necessary first to understand what kind of knowledge is in the requirement analysis process before worrying about representational schemes
Determining an organization's information requirements: a state of the art survey The issue of information requirements of an organization and their specifications span two isolated territories. One territory is that of organization and management and the other belongs to technicians. There is a considerable gap between these two territories. Research in requirements engineering (technician's side) has primarily concentrated on designing and developing formal languages to document and analyze user requirements, once they have been determined. This research has ignored the organizational issues involved in information requirements determination. Research in the field of organization and management has addressed the organizational issues which affect information requirements of an organization. Various frameworks reported in the literature provide insights, but they cannot be considered as methods of determining requirements. Little work has been done on the process of determining requirements. This process must start with the understanding of an organization and end with a formal specification of information requirements. Here, it is worth emphasizing the fact that the process of determining and specifying information requirements of an organization is very different from the process of specifying design requirements of an information system. Therefore, program design methodologies, which are helpful in designing a system are not suitable for the process of determining and specifying information requirements of an organization.This paper discusses the state of the art in information requirements determination methodologies. Excluded are those methodologies which emphasize system design and have little to offer for requirements determination of an organization.
Static Analysis to Identify Invariants in RSML Specifications. Static analysis of formal, high-level specifications of safety-critical software can discover flaws in the specification that would escape conventional syntactic and semantic analysis. As an example, specifications written in the Requirements State Machine Language (RSML) should be checked for consistency: two transitions out of the same state that are triggered by the same event should have mutually exclusive guarding conditions. The check uses only behavioral information that is local to...
The external structure: Experience with an automated module interconnection language To study the problems of modifiable software, the Software Technology project has investigated approaches and methodologies that could improve modifiability. To test our approaches tools based on data abstraction-a design and programming language and a module interconnection language-were built and used. The incorporation of the module interconnection language into design altered the traditional model of system building. Introducing novices to our approach led to the formalization of new models of program design, development, and evaluation.
Matrix factorizations for reversible integer mapping Reversible integer mapping is essential for lossless source coding by transformation. A general matrix factorization theory for reversible integer mapping of invertible linear transforms is developed. Concepts of the integer factor and the elementary reversible matrix (ERM) for integer mapping are introduced, and two forms of ERM, the triangular ERM (TERM) and the single-row ERM (SERM), are studied. We prove that there exist some approaches to factorize a matrix into TERMs or SERMs if the transform is invertible and in a finite-dimensional space. The advantages of the integer implementations of an invertible linear transform are (i) mapping integers to integers, (ii) perfect reconstruction, and (iii) in-place calculation. We find that besides a possible permutation matrix, the TERM factorization of an N-by-N nonsingular matrix has at most three TERMs, and its SERM factorization has at most N+1 SERMs. The elementary structure of ERM transforms is the ladder structure. An executable factorization algorithm is also presented. Then, the computational complexity is compared, and some optimization approaches are proposed. The error bounds of the integer implementations are estimated as well. Finally, three ERM factorization examples of DFT, DCT, and DWT are given.
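A minimal sketch of the ladder (lifting) structure mentioned above: a plane rotation factored into three triangular integer steps, each exactly invertible despite the rounding, which is what gives perfect reconstruction on integers. The angle and test values are arbitrary, and the general TERM/SERM factorization algorithm is not reproduced here.

```python
# Sketch of the ladder (lifting) structure used for reversible integer mapping:
# a plane rotation factored into three triangular steps, each of which maps
# integers to integers and is exactly invertible despite the rounding.
import math

def lifting_rotation(theta):
    p = (math.cos(theta) - 1) / math.sin(theta)
    u = math.sin(theta)

    def forward(x, y):
        x += round(p * y)
        y += round(u * x)
        x += round(p * y)
        return x, y

    def inverse(x, y):
        x -= round(p * y)
        y -= round(u * x)
        x -= round(p * y)
        return x, y

    return forward, inverse

fwd, inv = lifting_rotation(math.pi / 5)
for pair in [(7, -3), (120, 45), (-2, 9)]:
    assert inv(*fwd(*pair)) == pair      # perfect reconstruction on integers
print(fwd(7, -3), inv(*fwd(7, -3)))
```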
The software knowledge base We describe a system for maintaining useful information about a software project. The “software knowledge base” keeps track of software components and their properties; these properties are described through binary relations and the constraints that these relations must satisfy. The relations and constraints are entirely user-definable, although a set of predefined libraries of relations with associated constraints is provided for some of the most important aspects of software development (specification, design, implementation, testing, project management).The use of the binary relational model for describing the properties of software is backed by a theoretical study of the relations and constraints which play an important role in software development.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.053644
0.056735
0.056735
0.056735
0.041129
0.028368
0.009621
0.000461
0.000022
0
0
0
0
0
Efficient distributed state estimation of hidden Markov Models over unreliable networks This paper presents a new recursive Hybrid consensus filter for distributed state estimation on a Hidden Markov Model (HMM), which is well suited to multirobot applications and settings. The proposed algorithm is scalable, robust to network failure and capable of handling non-Gaussian transition and observation models and is, therefore, quite general. No global knowledge of the communication network is assumed. Iterative Conservative Fusion (ICF) is used to reach consensus over potentially correlated priors, while consensus over likelihoods is handled using weights based on a Metropolis Hastings Markov Chain (MHMC). The proposed method is evaluated in a multi-agent tracking problem and a high-dimensional HMM and it is shown that its performance surpasses the competing algorithms.
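Of the ingredients listed in the abstract above, the Metropolis-Hastings weight construction commonly used for consensus is the easiest to show compactly. The Python sketch below builds such weights from an assumed communication graph and averages per-node log-likelihood vectors; it is an illustrative sketch over made-up data, not the full Hybrid/ICF filter.

# Sketch of one ingredient only: Metropolis-Hastings consensus weights on an
# undirected communication graph, applied to per-node log-likelihood vectors.
# The graph, node count and likelihood values below are made-up examples.
import numpy as np

def metropolis_hastings_weights(adj):
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()     # self-weight keeps rows summing to one
    return W

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])          # a 4-node chain
W = metropolis_hastings_weights(adj)
loglik = np.random.randn(4, 3)          # each node's log-likelihood over 3 states
for _ in range(50):                     # repeated local averaging
    loglik = W @ loglik
print(loglik)                           # rows converge to the network average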
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
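A minimal sketch of the generic tabu-search ingredients mentioned above (move evaluation, a fixed tabu tenure, and an aspiration rule), applied to a small made-up multiconstraint knapsack instance; it does not implement the paper's specialized choice rules, extreme-point exploitation, or target analysis.

# Minimal tabu search for a 0/1 multiconstraint knapsack: flip moves, a fixed
# tabu tenure, and aspiration when a tabu move improves the best known value.
import random

values  = [10, 13, 7, 8, 15, 9, 4, 11]
weights = [[3, 4, 2, 3, 5, 3, 1, 4],      # one row per knapsack constraint
           [2, 3, 3, 1, 4, 2, 2, 3]]
caps    = [12, 9]

def feasible(x):
    return all(sum(w * c for w, c in zip(row, x)) <= cap
               for row, cap in zip(weights, caps))

def value(x):
    return sum(v * c for v, c in zip(values, x)) if feasible(x) else -1

x = [0] * len(values)
best, best_val = x[:], value(x)
tabu = {}                                  # item index -> iteration until which it is tabu
for it in range(200):
    candidates = []
    for i in range(len(values)):
        y = x[:]; y[i] ^= 1                # flip one item in or out
        v = value(y)
        # aspiration: a tabu move is still allowed if it beats the best solution
        if tabu.get(i, 0) > it and v <= best_val:
            continue
        candidates.append((v, i, y))
    v, i, y = max(candidates)              # best admissible neighbour
    x = y
    tabu[i] = it + 5                       # tabu tenure of 5 iterations
    if v > best_val:
        best, best_val = x[:], v
print(best, best_val)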
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that the conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Context based lossless coder based on RLS predictor adaption scheme In the paper a highly efficient context image lossless coder of moderate complexity is presented. Three main plus a few auxiliary contexts are described. Predictors are adaptive; an enhanced RLS coefficient update formula is implemented. A stage of NLMS prediction is added. Prediction error bias is removed using a robust multi-source approach. An advanced adaptive context arithmetic coder is applied. Experimental results show that indeed, the new coder is both more effective and faster than other state-of-the-art algorithms.
Performance optimized predictor blending technique for lossless image coding The paper presents a lossless coding method based on predictor blending approach that codes nine widely used benchmark images with the lowest average bitrate ever published. At the same time the algorithm is more time efficient than its most important competitors: TMWLEGO, MRP 0.5, and Multi-WLS. A set of 20 blended predictors is applied, including adaptive RLS/OLS ones, ALCM, CoBALP, and a texture context mapping one. A sophisticated error bias removal method is applied to outputs from few of them. Prediction error is coded using an advanced adaptive context arithmetic coder.
Context-based, adaptive, lossless image coding We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts
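The error-feedback idea described above can be sketched very roughly: a simple predictor whose output is corrected by the running mean of past prediction errors observed in the same quantized context. The Python sketch below is illustrative only; the predictor, context quantizer, and toy image are assumptions and do not reproduce the actual CALIC/GAP design.

# Simplified sketch of context-conditioned error feedback: a planar predictor
# corrected by the running mean of past errors seen in the same crude context.
# A decoder would maintain the same context state and add the residual back.
import numpy as np

def encode_residuals(img):
    h, w = img.shape
    bias_sum = {}                              # context -> accumulated error
    bias_cnt = {}                              # context -> number of occurrences
    residuals = np.zeros_like(img, dtype=np.int32)
    for y in range(1, h):
        for x in range(1, w):
            a, b, c = int(img[y, x-1]), int(img[y-1, x]), int(img[y-1, x-1])
            pred = a + b - c                       # simple planar predictor
            ctx = (np.sign(a - c), np.sign(b - c)) # crude texture context
            if bias_cnt.get(ctx, 0):
                pred += bias_sum[ctx] // bias_cnt[ctx]   # error feedback
            err = int(img[y, x]) - pred
            residuals[y, x] = err                  # would be entropy coded
            bias_sum[ctx] = bias_sum.get(ctx, 0) + err
            bias_cnt[ctx] = bias_cnt.get(ctx, 0) + 1
    return residuals

img = (np.arange(64).reshape(8, 8) % 17).astype(np.int32)  # toy "image"
print(np.abs(encode_residuals(img)).mean())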
An Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University, where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: [email protected]), (URL: http://www-formal.stanford.edu/jmc/), by starting with the class of expressions called S-expressions and the functions called...
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Voice as sound: using non-verbal voice input for interactive control We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction and then the system performs certain operation. Our goal is to achieve more direct, immediate interaction like using a button or joystick by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance traditional voice recognition approach.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
DOODLE: a visual language for object-oriented databases In this paper we introduce DOODLE, a new visual and declarative language for object-oriented databases. The main principle behind the language is that it is possible to display and query the database with arbitrary pictures. We allow the user to tailor the display of the data to suit the application at hand or her preferences. We want the user-defined visualizations to be stored in the database, and the language to express all kinds of visual manipulations. For extendibility reasons, the language is object-oriented. The semantics of the language is given by a well-known deductive query language for object-oriented databases. We hope that the formal basis of our language will contribute to the theoretical study of database visualizations and visual query languages, a subject that we believe is of great interest, but largely left unexplored.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.2
0.000791
0
0
0
0
0
0
0
0
0
0
0
On the Aesthetics of Diagrams Given the recent move towards visual languages in real-world system specification and design, the need for algorithmic procedures that produce clear and eye-pleasing layouts of complex diagrammatic entities arises in full force. This talk addresses a modest, yet still very difficult version of the problem, in which the diagrams are merely general undirected graphs with straight-line edges.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that the conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Combining angels, demons and miracles in program specifications The complete lattice of monotonic predicate transformers is interpreted as a command language with a weakest precondition semantics. This command lattice contains Dijkstra's guarded commands as well as miracles. It also permits unbounded nondeterminism and angelic nondeterminism. The language is divided into sublanguages using criteria of demonic and angelic nondeterminism, termination and absence of miracles. We investigate dualities between the sublanguages and how they can be generated from simple primitive commands. The notions of total correctness and refinement are generalized to the command lattice.
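For reference, the standard weakest-precondition readings of the constructs named above, as usually given in the refinement-calculus literature (the paper's notation may differ):

\[
\begin{aligned}
wp(\mathit{abort},\,Q) &= \mathit{false}, \qquad wp(\mathit{magic},\,Q) = \mathit{true},\\
wp(S \sqcap T,\,Q) &= wp(S,Q) \wedge wp(T,Q) && \text{(demonic choice)},\\
wp(S \sqcup T,\,Q) &= wp(S,Q) \vee wp(T,Q) && \text{(angelic choice)},\\
wp(S\,;\,T,\,Q) &= wp(S,\,wp(T,Q)).
\end{aligned}
\]

Refinement is then S ⊑ T iff wp(S,Q) implies wp(T,Q) for every postcondition Q; abort and magic are the bottom and top of the command lattice.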
Unifying correctness statements Partial, total and general correctness and further models of sequential computations differ in their treatment of finite, infinite and aborting executions. Algebras structure this diversity of models to avoid the repeated development of similar theories and to clarify their range of application. We introduce algebras that uniformly describe correctness statements, correctness calculi, pre-post specifications and loop refinement rules in five kinds of computation models. This extends previous work that unifies iteration, recursion and program transformations for some of these models. Our new description includes a relativised domain operation, which ignores parts of a computation, and represents bound functions for claims of termination by sequences of tests. We verify all results in Isabelle heavily using its automated theorem provers.
Monotone predicate transformers as up-closed multirelations In the study of semantic models for computations two independent views predominate: relational models and predicate transformer semantics. Recently the traditional relational view of computations as binary relations between states has been generalised to multirelations between states and properties allowing the simultaneous treatment of angelic and demonic nondeterminism. In this paper the two-level nature of multirelations is exploited to provide a factorisation of up-closed multirelations which clarifies exactly how multirelations model nondeterminism. Moreover, monotone predicate transformers are, in the precise sense of duality, up-closed multirelations. As such they are shown to provide a notion of effectivity of a specification for achieving a given postcondition.
Abstracto 84: The next generation Programming languages are not an ideal vehicle for expressing algorithms. This paper sketches how a language Abstracto might be developed for “algorithmic expressions” that may be manipulated by the rules of “algorithmics”, quite similar to the manipulation of mathematical expressions in mathematics. Two examples are given of “abstract” algorithmic expressions that are not executable in the ordinary sense, but may be used in the derivation of programs. It appears that the notion of “refinement” may be replaced by a weaker notion for abstract algorithmic expressions, corresponding also to a weaker notion of “weakest precondition”.
Engineering and theoretical underpinnings of retrenchment Refinement is reviewed, highlighting in particular the distinction between its use as a specification constructor at a high level, and its use as an implementation mechanism at a low level. Some of its shortcomings as a specification constructor at high levels of abstraction are pointed out, and these are used to motivate the adoption of retrenchment for certain high level development steps. Basic properties of retrenchment are described, including a justification of the operation proof obligation, simple examples, its use in requirements engineering and model evolution, and simulation properties. The interaction of retrenchment with refinement notions of correctness is overviewed, as is a range of other technical issues. Two case study scenarios are presented. One is a simple digital redesign control theory problem, and the other is an overview of the application of retrenchment to the Mondex Purse development.
A Single Complete Rule for Data Refinement One module is said to be refined by a second if no program using the second module can detect that it is not using the first; in that case the second module can replace the first in any program. Data refinement transforms the interior pieces of a module — its state and consequentially its operations — in order to refine the module overall.
Designs with Angelic Nondeterminism Hoare and He's Unifying Theories of Programming (UTP) are a predicative relational framework for the definition and combination of refinement languages for a variety of programming paradigms. Previous work has defined a theory for angelic nondeterminism in the UTP; this is basically an encoding of binary multirelations in a predicative model. In the UTP a theory of designs (pre and postcondition pairs) provides, not only a model of terminating programs, but also a stepping stone to define a theory for state-rich reactive processes. In this paper, we cast the angelic nondeterminism theory of the UTP as a theory of designs with the long-term objective of providing a model for well established refinement process algebras like Communicating Sequential Processes (CSP) and Circus.
A Weaker Precondition for Loops
HOL-Boogie -- An Interactive Prover for the Boogie Program-Verifier Boogie is a program verification condition generator for an imperative core language. It has front-ends for the programming languages C# and C enriched by annotations in first-order logic. Its verification conditions -- constructed via a wp calculus from these annotations -- are usually transferred to automated theorem provers such as Simplify or Z3. In this paper, however, we present a proof-environment, HOL-Boogie, that combines Boogie with the interactive theorem prover Isabelle/HOL. In particular, we present specific techniques combining automated and interactive proof methods for code-verification. We will exploit our proof-environment in two ways: First, we present scenarios to "debug" annotations (in particular: invariants) by interactive proofs. Second, we use our environment also to verify "background theories", i.e. theories for data-types used in annotations as well as memory and machine models underlying the verification method for C.
Quantitative evaluation of software quality The study reported in this paper establishes a conceptual framework and some key initial results in the analysis of the characteristics of software quality. Its main results and conclusions are: • Explicit attention to characteristics of software quality can lead to significant savings in software life-cycle costs. • The current software state-of-the-art imposes specific limitations on our ability to automatically and quantitatively evaluate the quality of software. • A definitive hierarchy of well-defined, well-differentiated characteristics of software quality is developed. Its higher-level structure reflects the actual uses to which software quality evaluation would be put; its lower-level characteristics are closely correlated with actual software metric evaluations which can be performed. • A large number of software quality-evaluation metrics have been defined, classified, and evaluated with respect to their potential benefits, quantifiability, and ease of automation. • Particular software life-cycle activities have been identified which have significant leverage on software quality. Most importantly, we believe that the study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited. This framework is certainly not complete, but it has been brought to a point sufficient to serve as a viable basis for future refinements and extensions.
ConceptBase—a deductive object base for meta data management Deductive object bases attempt to combine the advantages of deductive relational databases with those of object-oriented databases. We review modeling and implementation issues encountered during the development of ConceptBase, a prototype deductive object manager supporting the Telos object model. Significant features include: 1) The symmetric treatment of object-oriented, logic-oriented and graph-oriented perspectives, 2) an infinite metaclass hierarchy as a prerequisite for extensibility and schema evolution, 3) a simple yet powerful formal semantics used as the basis for implementation, 4) a client-server architecture supporting collaborative work in a wide-area setting. Several application experiences demonstrate the value of the approach especially in the field of meta data management.
Analogical Reuse of Requirements Specifications: A Computational Model Specifications of requirements for new software systems can be revised, refined, or completed in reference to specifications of requirements for existing similar systems. Although realized as a form of analogical problem solving, specification by reuse is not adequately supported by available computational models for detecting analogies. This is chiefly due to the following reasons: (1) It is assumed that specifications are expressed according to the same specification model and in a uniform representation scheme. (2) Additional information is needed for the detection of analogies, which is not contained in the specifications. (3) Performance scales poorly with the complexity of specifications. This article presents a computational model for detecting analogies, which addresses these issues to a certain extent. The application of the model in the specification of requirements by analogical reuse is demonstrated through an example, and its sensitivity to the representation of specifications is discussed. Finally, the results of a preliminary empirical evaluation of the model are reported.
The specification logic nuZ This paper introduces a wide-spectrum specification logic nu Z. The minimal core logic is extended to a more expressive specification logic which includes a schema calculus similar (but not equivalent) to Z, new additional schema operators, and extensions to programming and program development logics.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.022039
0.016907
0.016907
0.007754
0.006667
0.003408
0.0016
0.000249
0.000011
0
0
0
0
0
Robust passivity analysis of neural networks with discrete and distributed delays. This paper focuses on the problem of passivity of neural networks in the presence of discrete and distributed delay. By constructing an augmented Lyapunov functional and combining a new integral inequality with the reciprocally convex approach to estimate the derivative of the Lyapunov–Krasovskii functional, sufficient conditions are established to ensure the passivity of the considered neural networks, in which some useful information on the neuron activation function ignored in the existing literature is taken into account. Three numerical examples are provided to demonstrate the effectiveness and the merits of the proposed method.
Dissipativity analysis of neural networks with time-varying delays This paper focuses on the problem of delay-dependent dissipativity analysis for a class of neural networks with time-varying delays. A free-matrix-based inequality method is developed by introducing a set of slack variables, which can be optimized via existing convex optimization algorithms. Then, by employing a Lyapunov functional approach, sufficient conditions are derived to guarantee that the considered neural networks are strictly (Q,S,R)-γ-dissipative. The conditions are presented in terms of linear matrix inequalities and can be readily checked and solved. Numerical examples are finally provided to demonstrate the effectiveness and advantages of the proposed new design techniques.
Relaxed dissipativity criteria for memristive neural networks with leakage and time-varying delays. In this paper, the problem of strict (Q,S,R)-γ-dissipativity analysis for memristive neural networks (MNNs) with leakage and time-varying delays is studied. By applying nonsmooth analysis, MNNs are converted into the conventional neural networks (NNs). Based on the construction of a novel Lyapunov–Krasovskii functional (LKF), the relaxed dissipativity criteria are obtained by combining Wirtinger-based integral inequality with free-weighting matrices technique. This superior proposed criteria do not really require all the symmetric matrices involved in the employed quadratic to be positive definite. Moreover, the derived criteria are less conservative. Finally, two numerical examples are given to show the effectiveness and less conservatism of the proposed criteria.
Sampled-Data Synchronization Analysis of Markovian Neural Networks With Generally Incomplete Transition Rates This paper investigates the problem of sampled-data synchronization for Markovian neural networks with generally incomplete transition rates. Different from traditional Markovian neural networks, each transition rate can be completely unknown or only its estimate value is known in this paper. Compared with most of existing Markovian neural networks, our model is more practical because the transition rates in Markovian processes are difficult to precisely acquire due to the limitations of equipment and the influence of uncertain factors. In addition, the time-dependent Lyapunov-Krasovskii functional is proposed to synchronize drive system and response system. By applying an extended Jensen's integral inequality and Wirtinger's inequality, new delay-dependent synchronization criteria are obtained, which fully utilize the upper bound of variable sampling interval and the sawtooth structure information of varying input delay. Moreover, the desired sampled-data controllers are obtained. Finally, two examples are provided to illustrate the effectiveness of the proposed method.
Further improved results on stability and dissipativity analysis of static impulsive neural networks with interval time-varying delays. This paper deals with the problem of stability and dissipativity analysis for a class of static neural networks (SNNs) with interval time-varying delays. The system under study involves impulsive effects and time delays, which are often encountered in practice and are the sources of instability. Our attention is focused on an analysis of whether the system is asymptotically stable and strictly (Q,S,R)-γ-dissipative. Based on the Wirtinger-based single and double integral inequality technique combined with the free-weighting-matrix approach, which is expressed in terms of linear matrix inequalities (LMIs), we propose an improved delay-dependent stability and dissipativity criterion to guarantee the system to be admissible. Based on this criterion, a new sufficient delay and γ-dependent condition is given to guarantee that the SNNs with interval time-varying delays are strictly (Q,S,R)-γ-dissipative. Finally, the results developed in this paper can tolerate larger allowable delay bounds than the existing ones in the recent literature, which is demonstrated by several interesting examples.
Robust delay-dependent stability criteria for uncertain neural networks with two additive time-varying delay components. This paper considers the problem of robust stability of uncertain neural networks with two additive time-varying delay components. The activation functions are monotone nondecreasing with known lower and upper bounds. By constructing a modified augmented Lyapunov function, some new stability criteria are established in terms of linear matrix inequalities, which are easily solved by various convex optimization techniques. Compared with the existing works, the obtained criteria are less conservative due to the reciprocal convex technique and an improved inequality, which provides a more accurate upper bound than the Jensen inequality for dealing with the cross-term. Finally, two numerical examples are given to illustrate the effectiveness of the proposed method.
A survey of linear matrix inequality techniques in stability analysis of delay systems Recent years have witnessed a resurgence of research interests in analysing the stability of time-delay systems. Many results have been reported using a variety of approaches and techniques. However, much of the focus has been laid on the use of the Lyapunov-Krasovskii theory to derive sufficient stability conditions in the form of linear matrix inequalities. The purpose of this article is to survey the recent results developed to analyse the asymptotic stability of time-delay systems. Both delay-independent and delay-dependent results are reported in the article. Special emphases are given to the issues of conservatism of the results and computational complexity. Connections of certain delay-dependent stability results are also discussed.
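As a hedged, minimal example of the kind of LMI test the survey covers, the sketch below checks the classical delay-independent stability condition for dx/dt = A x(t) + Ad x(t - tau) with CVXPY; the matrices are made up and the condition shown is only one of the many criteria discussed.

# Classical delay-independent stability LMI: find P > 0, Q > 0 with
# [[A'P + PA + Q, P Ad], [Ad'P, -Q]] < 0. Requires an SDP-capable solver
# (SCS, typically installed alongside CVXPY).
import numpy as np
import cvxpy as cp

A  = np.array([[-2.0, 0.1], [0.3, -1.5]])
Ad = np.array([[0.2, 0.0], [0.1, 0.3]])
n  = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -Q    ]])
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print("delay-independent stability certificate found:", prob.status == cp.OPTIMAL)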
Stability of Recurrent Neural Networks With Time-Varying Delay via Flexible Terminal Method. This brief is concerned with the stability criteria for recurrent neural networks with time-varying delay. First, based on convex combination technique, a delay interval with fixed terminals is changed into the one with flexible terminals, which is called flexible terminal method (FTM). Second, based on the FTM, a novel Lyapunov-Krasovskii functional is constructed, in which the integral interval ...
Robust passivity analysis for neutral-type neural networks with mixed and leakage delays. This paper investigates the problem of passivity of neutral-type neural networks with mixed and leakage delays. By establishing a suitable augmented Lyapunov functional and combining a new integral inequality with the reciprocally convex combination technique, we obtain some sufficient passivity conditions, which are formulated in terms of linear matrix inequalities (LMIs). Here, some useful information on the neuron activation function ignored in the existing literature is taken into account. Finally, some numerical examples are given to demonstrate the effectiveness of the proposed method.
Distributed Moving Horizon Estimation for Linear Constrained Systems This paper presents a novel distributed estimation algorithm based on the concept of moving horizon estimation. Under weak observability conditions we prove convergence of the state estimates computed by any sensors to the correct state even when constraints on noise and state variables are taken into account in the estimation process. Simulation examples are provided in order to show the main features of the proposed method.
Parallel Programming in Linda
An approach to conceptual feedback in multiple viewed software requirements modeling This paper outlines part of an approach to these multiple-viewed requirements that provides some structure for integrating and validating multiple views. Most recent research has acknowledged the presence of multiple views, but only a few have explicitly modeled them as distinct views. The work of Nissen, et al. [Nissen96] is an example of a practical technique that is used in commercial settings to form a framework for discussion and negotiation among participants. Its biggest drawbacks are (a) ...
A formal approach to program modification This paper presents a systematic approach to implementing certain kinds of program modifications, in which (conceptually at least) the modification is implemented as a separate program and then integrated with the original program using semantically based transformations. This approach allows us to ensure that the required modification is implemented correctly and also allows us to explore different ways of implementing a given modification. The approach is illustrated informally using an example where the modification can be implemented in two distinct ways, and then formalised within the refinement calculus by defining a program conjunction operator whose properties justify the transformations required in the example.
Robust Delay-Dependent Stability Criteria for Time-Varying Delayed Lur'e Systems of Neutral Type This paper deals with the problem of the robust delay-dependent stability of uncertain Lur'e systems with neutral-type time-varying delays. By constructing a set of Lyapunov-Krasovskii functionals, less conservative robust stability criteria are derived in terms of linear matrix inequalities. The contribution to reduced conservatism of the proposed stability criteria relies on the reciprocally convex method and the Wirtinger inequality, which provide a tighter upper bound than the Jensen inequality. Three numerical examples are provided to show the effectiveness of the proposed method.
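The Wirtinger-based integral inequality invoked in several of the abstracts above is usually stated as follows (the Seuret-Gouaisbaut form, reproduced here for reference; notation may differ from the papers'):

\[
\int_a^b \dot{x}^{\mathsf T}(s)\, R\, \dot{x}(s)\, ds \;\ge\; \frac{1}{b-a}\,\omega_1^{\mathsf T} R\, \omega_1 \;+\; \frac{3}{b-a}\,\omega_2^{\mathsf T} R\, \omega_2,
\qquad
\omega_1 = x(b)-x(a), \quad
\omega_2 = x(b)+x(a) - \frac{2}{b-a}\int_a^b x(s)\, ds,
\]

for any matrix R ≻ 0. Dropping the second term recovers the Jensen bound, which is why criteria built on this inequality are less conservative.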
1.015685
0.008495
0.007143
0.004764
0.003702
0.00277
0.00095
0.000216
0.000036
0
0
0
0
0
Adaptive sliding mode fault-tolerant control for type-2 fuzzy systems with distributed delays. In this paper, the problem of sliding-mode fault-tolerant control is addressed for a class of uncertain nonlinear systems with distributed delays and parameter perturbations. By using interval type-2 Takagi–Sugeno (T–S) fuzzy models, the nonlinear systems are formulated, in which uncertain parameters and distributed state delays are represented in a unified type-2 fuzzy framework. In order to tackle the uncertain parameters in pre-designed membership functions, an adaptive mechanism is utilized to manage the time-varying weightings corresponding to the upper membership functions. A simple linear sliding surface subject to several solvable matrix inequalities is designed by using a reduced-order system. To guarantee the stability of the overall dynamic system, an adaptive sliding mode controller is designed, which can compensate for both uncertainties and distributed delays. Finally, a truck-trailer model system is used in simulations to verify the applicability and effectiveness of the control and estimation schemes.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
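To make the basic idea of the tabu search abstract above concrete, the following is a minimal, generic tabu-search sketch for a 0/1 multiconstraint knapsack instance. It is not the specialised approach described there (no aspiration criteria, probabilistic measures or target analysis); the instance data, tabu tenure and single-flip neighbourhood are illustrative assumptions only.

# Hypothetical multiconstraint knapsack instance: maximise total value subject
# to two capacity constraints over 0/1 item choices.
values   = [10, 13, 7, 8, 15, 9]
weights  = [[2, 3, 1, 4, 5, 2],
            [1, 2, 2, 3, 1, 4]]
capacity = [8, 7]

def feasible(x):
    return all(sum(w[i] * x[i] for i in range(len(x))) <= c
               for w, c in zip(weights, capacity))

def value(x):
    return sum(v * xi for v, xi in zip(values, x))

def tabu_search(iterations=200, tenure=3):
    x = [0] * len(values)                 # start from the empty knapsack
    best, best_val = x[:], value(x)
    tabu = {}                             # item index -> iteration until which flipping it is forbidden
    for it in range(iterations):
        candidates = []
        for i in range(len(x)):           # neighbourhood: flip a single item in or out
            y = x[:]
            y[i] ^= 1
            if feasible(y) and tabu.get(i, -1) <= it:
                candidates.append((value(y), i, y))
        if not candidates:
            break
        val, i, y = max(candidates)       # take the best admissible (non-tabu) move, even if worsening
        x = y
        tabu[i] = it + tenure             # forbid undoing this move for a few iterations
        if val > best_val:
            best, best_val = y[:], val
    return best, best_val

print(tabu_search())                      # prints the best selection found and its value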
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
score_0 … score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Towards Symbolic Model-Based Mutation Testing: Combining Reachability And Refinement Checking Model-based mutation testing uses altered test models to derive test cases that are able to reveal whether a modelled fault has been implemented. This requires conformance checking between the original and the mutated model. This paper presents an approach for symbolic conformance checking of action systems, which are well-suited to specify reactive systems. We also consider non-determinism in our models. Hence, we do not check for equivalence, but for refinement. We encode the transition relation as well as the conformance relation as a constraint satisfaction problem and use a constraint solver in our reachability and refinement checking algorithms. Explicit conformance checking techniques often face state space explosion. First experimental evaluations show that our approach has potential to outperform explicit conformance checkers.
Towards Symbolic Model-Based Mutation Testing: Pitfalls in Expressing Semantics as Constraints Model-based mutation testing uses altered models to generate test cases that are able to detect whether a certain fault has been implemented in the system under test. For this purpose, we need to check for conformance between the original and the mutated model. We have developed an approach for conformance checking of action systems using constraints. Action systems are well-suited to specify reactive systems and may involve non-determinism. Expressing their semantics as constraints for the purpose of conformance checking is not totally straightforward. This paper presents some pitfalls that hinder the way to a sound encoding of semantics into constraint satisfaction problems and gives solutions for each problem.
Efficient Refinement Checking for Model-Based Mutation Testing In model-based mutation testing, a test model is mutated for test case generation. The resulting test cases are able to detect whether the faults in the mutated models have been implemented in the system under test. For this purpose, a conformance check between the original and the mutated model is required. We have developed an approach for conformance checking of action systems, which are well-suited to specify reactive and non-deterministic systems. We rely on constraint solving techniques. Both, the conformance relation and the transition relation are encoded as constraint satisfaction problems. Earlier results showed the potential of our constraint-based approach to outperform explicit conformance checking techniques, which often face state space explosion. In this work, we go one step further and show optimisations that really boost our performance. In our experiments, we could reduce our runtimes by 80%.
Efficient Mutation Killers in Action This paper presents the techniques and results of a novel model-based test case generation approach that automatically derives test cases from UML state machines. Mutation testing is applied on the modeling level to generate test cases. We present the test case generation approach, discuss the tool chain, and present the properties of the generated test cases. The main contribution of this paper is an empirical study of a car alarm system where different strategies for killing mutants are compared. We present detailed figures on the effectiveness of the test case generation technique. Although UML serves as an input language, all techniques are grounded on solid foundations: we give UML state transition diagrams a formal semantics by mapping them to Back's action systems.
Model-Based Mutation Testing of an Industrial Measurement Device.
Model-based mutation testing via symbolic refinement checking. In model-based mutation testing, a test model is mutated for test case generation. The resulting test cases are able to detect whether the faults in the mutated models have been implemented in the system under test. For this purpose, a conformance check between the original and the mutated model is required. The generated counterexamples serve as basis for the test cases. Unfortunately, conformance checking is a hard problem and requires sophisticated verification techniques. Previous attempts using an explicit conformance checker suffered state space explosion. In this paper, we present several optimisations of a symbolic conformance checker using constraint solving techniques. The tool efficiently checks the refinement between non-deterministic test models. Compared to previous implementations, we could reduce our runtimes by 97%. In a new industrial case study, our optimisations can reduce the runtime from over 6 hours to less than 3 minutes.
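As a toy illustration of the conformance-checking step that the mutation-testing abstracts above rely on, the sketch below compares an original model with a mutant using plain finite trace inclusion. This is a deliberate simplification: the tools described use refinement relations over (possibly non-deterministic) action systems and constraint solvers, whereas here the models, action names and exploration depth are made up and the check is brute force.

# Toy conformance check: the mutant "conforms" if every action trace it can
# produce (up to a small depth) is also a trace of the original model. A trace
# only the mutant can produce is a counterexample, i.e. a candidate test case.

def traces(transitions, init, depth):
    """All action sequences of length <= depth of a labelled transition system."""
    result = {()}
    frontier = {((), init)}
    for _ in range(depth):
        step = set()
        for trace, state in frontier:
            for src, action, dst in transitions:
                if src == state:
                    step.add((trace + (action,), dst))
        result |= {tr for tr, _ in step}
        frontier = step
    return result

original = [(0, "coin", 1), (1, "coffee", 0), (1, "tea", 0)]
mutant   = [(0, "coin", 1), (1, "coffee", 0), (1, "tea", 1)]   # injected fault: 'tea' no longer resets

extra = traces(mutant, 0, 4) - traces(original, 0, 4)
if extra:
    print("counterexample traces (candidate test cases):", sorted(extra))
else:
    print("mutant conforms to the original")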
A distributed algorithm to implement n-party rendezvous The concept of n-party rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The problem of implementing n-party rendezvous captures two central issues in the design of distributed systems: exclusion and synchronization. This paper describes a simple, distributed algorithm, referred to as the event manager algorithm, to implement n-party rendezvous. It also compares the performance of this algorithm with an existing algorithm for this problem.
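For readers unfamiliar with the primitive, the snippet below shows the synchronisation semantics of an n-party rendezvous in a few lines, using a central barrier between threads. It only conveys what must be achieved, not the paper's distributed event-manager algorithm (there is no message passing and no distributed exclusion here); the party names and sleep times are illustrative.

import random
import threading
import time

N = 3
rendezvous = threading.Barrier(N)    # an n-party rendezvous: nobody proceeds until all N arrive

def party(name):
    time.sleep(random.random())      # simulate independent local computation
    print(name, "is ready for the interaction")
    rendezvous.wait()                # synchronise with the other N-1 parties
    print(name, "executes the joint interaction")

threads = [threading.Thread(target=party, args=("P%d" % i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()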
Action Systems with Continuous Behaviour An action system framework is a predicate transformer based method for modelling and analysing distributed and reactive systems. The actions are statements in Dijkstra's guarded command language, and their semantics is given by predicate transformers. We extend conventional action systems with a differential action consisting of a differential equation and an evolution guard. The semantics is given by a weakest liberal precondition transformer, because it is not always desirable that differential actions terminate. It is shown that the proposed differential action has a semantics which corresponds to a discrete approximation when the discrete step size goes to zero. The extension gives action systems the power to model real-time clocks and continuous evolutions within hybrid systems. In this paper we give a standard form for such a hybrid action system. We also extend parallel composition to hybrid action systems. This does not change the original meaning of the parallel composition, and therefore ordinary action systems compose in parallel with hybrid action systems.
A validation system for object oriented specifications of information systems In this paper, we present a set of software tools for developing and validating object oriented conceptual models specified in TROLL. TROLL is a formal object-oriented language for modelling information systems on a high level of abstraction. The tools include editors, syntax and consistency checkers as well as an animator which generates executable prototypes from the models on the same level of abstraction. In this way, the model behaviour can be observed and checked against the informal user requirements. After a short introduction to some validation techniques and research questions, we describe briefly the TROLL language as well as its graphical version OMTROLL. We then explain the system architecture and show its functionalities by a simplified example of an industrial application which is called CATC (Computer-Aided Testing and Certifying).
Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques when 'programming in the small' and do not stand on the firm experience of existing development methods for large systems. One such technique called block design has been used within the telecommunication industry and relies on a similar paradigm as object-oriented programming. The two techniques together with a third technique, conceptual modeling used for requirement modeling of information systems, have been unified into a method for the development of large systems.
Financial Privacy Policies and the Need for Standardization By analyzing 40 online privacy policy documents from nine financial institutions, the authors examine the clarity and readability of these important privacy notices. Using goal-driven requirements engineering techniques and readability analysis, the findings show that compliance with the existing legislation and standards is, at best, questionable.
Analyzing Regulatory Rules for Privacy and Security Requirements Information practices that use personal, financial and health-related information are governed by U.S. laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must be properly aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These "rules" are often precursors to software requirements that must undergo considerable refinement and analysis before they are implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology to extract access rights and obligations directly from regulation texts. The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross-references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the U.S. Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.
Processing Negation in NL Interfaces to Knowledge Bases This paper deals with Natural Language (NL) question-answering to knowledge bases (KB). It considers the usual conceptual graphs (CG) approach for NL semantic interpretation by joins of canonical graphs and compares it to the computational linguistics approach for NL question-answering based on logical forms. After these theoretical considerations, the paper presents a system for querying a KB of CG in the domain of finances. It uses controlled English and processes large classes of negative questions. Internally the negation is interpreted as a replacement of the negated type by its siblings from the type hierarchy. The answer is found by KB projection, generalized and presented in NL in a rather summarized form, without a detailed enumeration of types. Thus the paper presents an interface for NL understanding and original techniques for application of CG operations (projection and generalization) as means for obtaining a more "natural" answer to the user's negative questions.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 … score_13: 1.031973, 0.031796, 0.02624, 0.02417, 0.008007, 0.001465, 0.000065, 0.000008, 0, 0, 0, 0, 0, 0
Developing formal object-oriented requirements specifications: a model, tool and technique The creation of a requirements specification for systems development has always been a difficult problem and continues to be a problem in the object-oriented software development paradigm. The problem persists because there is a paucity of formal, object-oriented specification models that are seamlessly integrated into the development cycle and that are supported by automated tools. Here, we present a formal object-oriented specification model (OSS), which is an extension of an object-oriented analysis model (OSA), and which is supported by a tool (IPOST) that automatically generates a prototype from an OSA model instance, lets the user execute the prototype, and permits the user to refine the OSA model instance to generate a requirements specification. This technique leverages the benefits of a formal model, an object-oriented model, a seamless model, a graphical diagrammatic model, incremental development, and CASE tool support to facilitate the development of requirements specifications.
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 … score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Implementing reactive closed-system specifications The objective of implementation is to bridge the gap between the specification model and available implementation technology. The ongoing trend in electronic design automation is to widen this gap by introducing more abstract specification models to produce increasingly complex systems within shorter time spans. At the same time, advances in implementation tools and methods have been less dramatic. In this paper, we discuss a case study that models an access cycle in the Industry Standard Architecture bus and present systematic methods for implementing state-based specifications in software and hardware. We focus on the formal properties known as safety—characterizations of the kind ‘nothing bad ever happens’—and liveness—characterizations of the kind ‘something good eventually happens’. Particular emphasis is laid on liveness properties and scheduling since these are the driving force that make things happen in operational specifications. We represent specifications graphically using the Temporal Logic of Actions, a logic that models system behaviour by sequences of states.
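The safety/liveness distinction drawn in this abstract is often easiest to see as temporal-logic formulas; the generic LTL-style renderings below are only illustrative and are not the paper's ISA-bus specification (Bad, Req and Ack are placeholder predicates).

% Safety ("nothing bad ever happens"): a bad state is never reached.
\text{Safety:}\quad \Box\,\lnot \mathit{Bad}
% Liveness ("something good eventually happens"): e.g. every request is
% eventually acknowledged; this is the kind of property scheduling must enforce.
\text{Liveness:}\quad \Box\,(\mathit{Req} \Rightarrow \Diamond\,\mathit{Ack})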
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 … score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Exploring interactive information retrieval: an integrated approach to interface design and interaction analysis In this paper, we describe a novel methodology that integrates the design of the (i) user interface; (ii) interaction logger; and (iii) log analyzer. It is based on formalizing, via UML state diagrams, the functionality that is supported by an interactive system, deriving XML schemas for capturing the interactions in activity logs and deriving log parsers that reveal the system states and the state transitions that took place during the interaction. Subsequent analysis of state activities and state transitions captured in the logs can be used to study the user-system interaction or to test some research hypothesis. While this approach is rather general and can be applied in studying a variety of interactive systems, it has been devised and applied in research work on exploratory information retrieval, where the focus is on studying the interaction and on finding interaction patterns.
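A very small sketch of the log-and-parse idea described above: user interactions are appended to an XML activity log and a parser later reconstructs the sequence of system states. The element names, states and events are hypothetical, not the schemas the paper derives from UML state diagrams.

import xml.etree.ElementTree as ET

log = ET.Element("session")

def record(event, source, target):
    # Interaction logger: append one state transition to the XML activity log.
    ET.SubElement(log, "transition", {"event": event, "from": source, "to": target})

record("submit_query", "Idle", "ResultsShown")
record("open_document", "ResultsShown", "Reading")
record("go_back", "Reading", "ResultsShown")

xml_text = ET.tostring(log, encoding="unicode")
print(xml_text)

# Log analyzer: parse the log back and recover the visited states in order.
transitions = ET.fromstring(xml_text)
states = [t.get("from") for t in transitions] + [transitions[-1].get("to")]
print(" -> ".join(states))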
An Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 … score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Connections in acyclic hypergraphs We demonstrate a sense in which the equivalence between blocks (subgraphs without articulation points) and biconnected components (subgraphs in which there are two edge-disjoint paths between any pair of nodes) that holds in ordinary graph theory can be generalized to hypergraphs. The result has an interpretation for relational databases that the universal relations described by acyclic join dependencies are exactly those for which the connections among attributes are defined uniquely. We also exhibit a relationship between the process of Graham reduction (Graham, 1979) of hypergraphs and the process of tableau reduction (Aho, Sagiv and Ullman, 1979) that holds only for acyclic hypergraphs.
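Since the abstract above refers to Graham reduction, here is a compact sketch of that reduction (also known as GYO reduction) used as an acyclicity test; the example hyperedges are made up and the handling of duplicate edges is simplified.

def graham_reduce(hyperedges):
    """Repeatedly (a) drop attributes occurring in only one hyperedge and
    (b) drop hyperedges contained in another; an acyclic hypergraph reduces to []."""
    edges = [set(e) for e in hyperedges if e]
    changed = True
    while changed:
        changed = False
        # (a) remove attributes that appear in exactly one hyperedge
        for e in edges:
            for a in list(e):
                if sum(a in f for f in edges) == 1:
                    e.discard(a)
                    changed = True
        edges = [e for e in edges if e]
        # (b) remove hyperedges contained in some other (kept) hyperedge
        kept = []
        for i, e in enumerate(edges):
            covered = any(i != j and (e < f or (e == f and j < i))
                          for j, f in enumerate(edges))
            if covered:
                changed = True
            else:
                kept.append(e)
        edges = kept
    return edges

print(graham_reduce([{"A", "B"}, {"B", "C"}, {"C", "D"}]))   # [] -> acyclic
print(graham_reduce([{"A", "B"}, {"B", "C"}, {"C", "A"}]))   # a cycle remains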
Integrity Checking in a Logic-Oriented ER Model
Query Optimization Techniques Utilizing Path Indexes in Object-Oriented Database Systems We propose query optimization techniques that fully utilize the advantages of path indexes in object-oriented database systems. Although path indexes provide an efficient access to complex objects, little research has been done on query optimization that fully utilizes path indexes. We first devise a generalized index intersection technique, adapted to the structure of the path index extended from conventional indexes, for utilizing multiple (path) indexes to access each class in a query. We...
A Graphical Query Language Based on an Extended E-R Model
Levelled Entity Relationship Model The Entity-Relationship formalism, introduced in the mid-seventies, is an extensively used tool for database design. The database community is now involved in building the next generation of database systems. However, there is no effective formalism similar to ER for modeling the complex data in these systems. We propose the Leveled Entity Relationship (LER) formalism as a step towards fulfilling such a need. An essential characteristic of these next-generation systems is that a data element is ...
Behavioural Constraints Using Events
Proving Liveness Properties of Concurrent Programs
Seamless visual object-oriented behavior modeling for distributed software systems To ease the development of distributed systems, the visual notions for the structural aspects of object-oriented analysis and design should be combined with techniques for handling concurrency and distribution. A novel approach and language for the visual design of distributed software systems is introduced and illustrated by means of an example. The language of OCoNs (Object Coordination Nets) is integrated into the structuring mechanisms of the UML (Unified Modeling Language) standard for object-oriented analysis and design. Such an object-oriented notation is crucial for handling complex software systems and can be extended with the graphical expressive power of Petri nets to also describe concurrency and coordination. The same visual language is used to specify the interfaces and contracts of software components, the resource handling within a component as well as the control flow of services
Argonaute: graphical description, semantics and verification of reactive systems by using a process algebra The Argonaute system is specifically designed to describe, specify and verify reactive systems such as communication protocols, real-time applications, man-machine interfaces, ... It is based upon the Argos graphical language, whose syntax relies on the Higraphs formalism by D. Harel [HAR88], and whose semantics is given by using a process algebra. Automata form the basic notion of the language, and hierarchical or parallel decompositions are given by using operators of the algebra. The...
Reusing analogous components Using formal specifications to represent software components facilitates the determination of reusability because they more precisely characterize the functionality of the software, and the well-defined syntax makes processing amenable to automation. This paper presents an approach, based on formal methods, to the search, retrieval, and modification of reusable software components. From a two-tiered hierarchy of reusable software components, the existing components that are analogous to the query specification are retrieved from the hierarchy. The specification for an analogous retrieved component is compared to the query specification to determine what changes need to be applied to the corresponding program component in order to make it satisfy the query specification.
A Software Engineering View of Data Base Management This paper examines the field of data base management from the perspective of software engineering. Key topics in software engineering are related to specific activities in data base design and implementation. An attempt is made to show the similarities between steps in the creation of systems involving data bases and other kinds of software systems. It is argued that there is a need to unify thinking about data base systems with other kinds of software systems and tools in order to build high quality systems. The programming language PLAIN and its programming environment is introduced as a tool for integrating notions of programming languages, data base management, and software engineering.
Logarithmical hopping encoding: a low computational complexity algorithm for image compression LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber-Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient.
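To give a feel for the core mechanism described above, the sketch below predicts each pixel from its left neighbour and logarithmically quantises the prediction error into a small set of "hops". The hop amplitudes, the fixed initial prediction and the single-row scope are simplifying assumptions; the actual LHE codec uses perceptually tuned adaptive hops, chrominance handling and the perceptual-relevance downsampling stage.

# Hypothetical logarithmic "hop" ladder: small errors are represented finely,
# large errors coarsely, loosely following the Weber-Fechner idea.
HOPS = [-64, -16, -4, 0, 4, 16, 64]

def encode_row(row):
    """Encode one luminance scanline into hop indices; return indices and the decoded row."""
    prediction = 128                       # initial prediction: mid grey
    indices, reconstruction = [], []
    for pixel in row:
        error = pixel - prediction
        k = min(range(len(HOPS)), key=lambda i: abs(HOPS[i] - error))   # nearest hop
        prediction = max(0, min(255, prediction + HOPS[k]))             # decoder tracks the same value
        indices.append(k)
        reconstruction.append(prediction)
    return indices, reconstruction

row = [120, 124, 130, 200, 205, 60, 58]    # hypothetical luminance values
indices, decoded = encode_row(row)
print(indices)
print(decoded)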
Report from the Joint W3C/IETF URI Planning Interest Group: Uniform Resource Identifiers (URIs), URLs, and Uniform Resource Names (URNs): Clarifications and Recommendations
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
score_0 … score_13: 1.105264, 0.100071, 0.100071, 0.100071, 0.050052, 0.025661, 0.002946, 0.000057, 0.000025, 0.000005, 0.000001, 0, 0, 0
A Proof System for Communicating Sequential Processes An axiomatic proof system is presented for proving partial correctness and absence of deadlock (and failure) of communicating sequential processes. The key (meta) rule introduces cooperation between proofs, a new concept needed to deal with proofs about synchronization by message passing. CSP's new convention for distributed termination of loops is dealt with. Applications of the method involve correctness proofs for two algorithms, one for distributed partitioning of sets, the other for distributed computation of the greatest common divisor of n numbers.
A Systematic Approach to the Development of Event Based Applications We propose a novel framework (LECAP) for the development of event-based applications. Our approach offers the following advantages over existing approaches: 1) it supports a while-parallel language, 2) the reasoning allows a dynamic (instead of static) binding of programs to events, 3) it is oriented towards stepwise development of systems, and 4) the underlying logic supports the composition of specifications. The event based architectural style has been recognized as fostering the development of large-scale and complex systems by loosely coupling their components. It is therefore increasingly deployed in various environments such as middleware for mobile computing, message oriented middleware, integration frameworks, communication standards, and commercial toolkits. Current approaches to the development of event-based applications are ad hoc and do not support reasoning about their correctness. The LECAP approach is intended to solve this problem through a compositional and stepwise approach to specification and verification of event-based applications.
Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement.
Procedures and concurrency: A study in proof
A proof system for concurrent ADA programs A subset of ADA is introduced, ADA-CF, to study the basic synchronization and communication primitive of ADA, the rendezvous. Basing ourselves on the techniques introduced by Apt, Francez and de Roever for their CSP proof system, we develop a Hoare-style proof system for proving partial correctness properties which is sound and relatively complete. The proof system is then extended to deal with safety, deadlock, termination and failure. No prior exposure of the reader to parallel program proving techniques is presupposed. Two non-trivial example proofs are given of ADA-CF programs; the first one concerns a buffered producer-consumer algorithm, the second one a parallel sorting algorithm due to Brinch Hansen. Features of ADA expressing dynamic process creation and realtime constraints are not covered by our proof methods. Consequently, we do not claim that the methods described can be extended to full ADA without serious additional further research.
Fairness and hyperfairness in multi-party interactions In this paper, a new fairness notion is proposed for languages with multi-party interactions as the sole interprocess synchronization and communication primitive. The main advantage of this fairness notion is the elimination of starvation occurring solely due to race conditions (i.e., ordering of independent actions). Also, this is the first fairness notion for such languages which is fully-adequate with respect to the criteria presented in [AFK88]. The paper defines the notion, proves its properties, and presents examples of its usefulness.
Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs.
Distributed Termination Discussed is a distributed system based on communication among disjoint processes, where each process is capable of achieving a post-condition of its local space in such a way that the conjunction of local post-conditions implies a global post-condition of the whole system. The system is then augmented with extra control communication in order to achieve distributed termination, without adding new channels of communication. The algorithm is applied to a problem of constructing a sorted partition.
Justifications for the event-b modelling notation Event-B is a notation and method for discrete systems modelling by refinement. The notation has been carefully designed to be simple and easily teachable. The simplicity of the notation also takes into account support by a modelling tool. This is important because Event-B is intended to be used to create complex models. Without appropriate tool support this would not be possible. This article presents justifications and explanations for the choices that have been made when designing the Event-B notation.
Recording the reasons for design decisions We outline a generic model for representing design deliberation and the relation between deliberation and the generation of method-specific artifacts. A design history is regarded as a network consisting of artifacts and deliberation nodes. Artifacts represent specifications or design documents. Deliberation nodes represent issues, alternatives or justifications. Existing artifacts give rise to issues about the evolving design, an alternative is one of several positions that respond to the issue (perhaps calling for the creation or modification of an artifact), and a justification is a statement giving the reasons for and against the related alternative. The model is applied to the development of a text formatter. The example necessitates some tailoring of the generic model to the method adopted in the development, Liskov and Guttag's design method. We discuss the experiment and the method-specific extensions. The example development has been represented in hypertext and as a Prolog database, the two representations being shown to complement each other. We conclude with a discussion of the relation between this model and other work, and the implications for tool support and methods.
An ontological model of an information system An ontological model of an information system that provides precise definitions of fundamental concepts like system, subsystem, and coupling is proposed. This model is used to analyze some static and dynamic properties of an information system and to examine the question of what constitutes a good decomposition of an information system. Some of the major types of information system formalisms that bear on the authors' goals and their respective strengths and weaknesses relative to the model are briefly reviewed. Also articulated are some of the fundamental notions that underlie the model. Those basic notions are then used to examine the nature and some dynamics of system decomposition. The model's predictive power is discussed.
The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS
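The "simple fixed context model" mentioned above is built around the median edge detector (MED) predictor, which JPEG-LS uses to switch between the left neighbour, the upper neighbour, and a planar estimate depending on whether an edge is detected. A small sketch follows; context modelling, bias correction and the Golomb coding stage are omitted.

# Median edge detector (MED) predictor used by LOCO-I / JPEG-LS.
# a = left neighbour, b = upper neighbour, c = upper-left neighbour.

def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)      # edge detected: fall back to the smaller neighbour
    if c <= min(a, b):
        return max(a, b)      # edge detected: fall back to the larger neighbour
    return a + b - c          # smooth region: planar prediction

# Example: a bright upper-left corner signals an edge, so the left value (50) is used.
print(med_predict(a=50, b=200, c=210))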
Distributed Mobile Communication Base Station Diagnosis and Monitoring Using Multi-agents Most inherently distributed systems require self diagnosis and on-line monitoring. This is especially true in the domains of power transmission and mobile communication. Much effort has been expended in developing on-site monitoring systems for distributed power transformers and mobile communication base stations.In this paper, a new approach has been employed to implement the autonomous self diagnosis and on-site monitoring using multi-agents on mobile communication base stations.
Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems).
1.018577
0.016786
0.016786
0.015583
0.009405
0.003633
0.001192
0.00015
0.00004
0.000016
0.000004
0
0
0
A process modeling language for large process control systems A process modeling language (PML) has been developed which aids the analyst in modeling the processes of large process control systems. Because PML is based on a conceptual framework close to the customer's view of controlled processes, a PML model is an effective communication medium between the analyst and the customer. PML allows a process to be decomposed into activities and a composite activity to be decomposed into more primitive activities. The structuring facility for these decompositions is the same. PML represents activity sequences by scenarios and uses conditions to control their start and end. The modeling of various operation sequences, interprocess sequence control, and operators' interactions is made easy. The condition facility also facilitates the specification of various timing constraints. Therefore, PML is a valuable aid for the requirements analysis of large process control systems
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
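To make the tabu search loop described above concrete, here is a minimal sketch for a single-constraint 0/1 knapsack using bit-flip moves, a fixed tabu tenure, an infeasibility penalty and a simple aspiration rule. The paper's specialised choice rules, advanced-level strategies and target analysis are not reproduced, and the penalty weight is an arbitrary assumption.

import random

def tabu_knapsack(values, weights, capacity, iters=200, tenure=7, seed=0):
    """Sketch of tabu search with bit-flip moves; not the paper's specialised rules."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n
    tabu = {}                      # item index -> iteration until which flipping it is tabu

    def score(sol):
        w = sum(wi for wi, xi in zip(weights, sol) if xi)
        v = sum(vi for vi, xi in zip(values, sol) if xi)
        return v - 10 * max(0, w - capacity)   # penalise infeasible solutions

    best, best_val = list(x), score(x)
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            s = score(y)
            if tabu.get(i, -1) < it or s > best_val:   # aspiration: allow a new best
                candidates.append((s, i, y))
        if not candidates:
            continue
        s, i, x = max(candidates)   # best admissible move, even if it worsens the score
        tabu[i] = it + tenure
        if s > best_val:
            best, best_val = list(x), s
    return best, best_val

print(tabu_knapsack([10, 7, 12, 4], [5, 4, 6, 2], capacity=10))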
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the Facial Thue Choice Number of Plane Graphs Via Entropy Compression Method. Let G be a plane graph. A vertex-colouring $\varphi$ of G is called facial non-repetitive if for no sequence $r_1 r_2 \ldots r_{2n}$, $n \ge 1$, of consecutive vertex colours of any facial path it holds that $r_i = r_{n+i}$ for all $i = 1, 2, \ldots, n$. A plane graph G is facial non-repetitively k-choosable if for every list assignment $L\colon V \rightarrow 2^{\mathbb{N}}$ with minimum list size at least k there is a facial non-repetitive vertex-colouring $\varphi$ with colours from the associated lists. The facial Thue choice number, $\pi_{fl}(G)$, of a plane graph G is the minimum number k such that G is facial non-repetitively k-choosable. We use the so-called entropy compression method to show that $\pi_{fl}(G) \le c\Delta$ for some absolute constant c and G a plane graph with maximum degree $\Delta$. Moreover, we give some better (constant) upper bounds on $\pi_{fl}(G)$ for special classes of plane graphs.
On the Facial Thue Choice Index via Entropy Compression. A sequence is nonrepetitive if it contains no identical consecutive subsequences. An edge coloring of a path is nonrepetitive if the sequence of colors of its consecutive edges is nonrepetitive. By the celebrated construction of Thue, it is possible to generate nonrepetitive edge colorings for arbitrarily long paths using only three colors. A recent generalization of this concept implies that we may obtain such colorings even if we are forced to choose edge colors from any sequence of lists of size 4 (while sufficiency of lists of size 3 remains an open problem). As an extension of these basic ideas, Havet, Jendrol', Sotak, and Skrabul'akova proved that for each plane graph, eight colors are sufficient to provide an edge coloring so that every facial path is nonrepetitively colored. In this article, we prove that the same is possible from lists, provided that these have size at least 12. We thus improve the previous bound of 291 (proved by means of the Lovasz Local Lemma). Our approach is based on the Moser-Tardos entropy-compression method and its recent extensions by Grytczuk, Kozik, and Micek, and by Dujmovic, Joret, Kozik, and Wood.
Entropy compression method applied to graph colorings. Based on the algorithmic proof of the Lovász local lemma due to Moser and Tardos, Esperet and Parreau developed a framework to prove upper bounds for several chromatic numbers (in particular acyclic chromatic index, star chromatic number and Thue chromatic number) using the so-called entropy compression method. Inspired by this work, we propose a more general framework and a better analysis. This leads to improved upper bounds on chromatic numbers and indices. In particular, every graph with maximum degree $\Delta$ has an acyclic chromatic number at most $\frac{3}{2}\Delta^{4/3} + O(\Delta)$, and a non-repetitive chromatic number at most $\Delta^2 + 1.89\Delta^{5/3} + O(\Delta^{4/3})$. Also every planar graph with maximum degree $\Delta$ has a facial Thue chromatic number at most $\Delta + O(\Delta^{1/2})$ and a facial Thue chromatic index at most 10.
Nonrepetitive colorings of trees A coloring of the vertices of a graph G is nonrepetitive if no path in G forms a sequence consisting of two identical blocks. The minimum number of colors needed is the Thue chromatic number, denoted by π(G). A famous theorem of Thue asserts that π(P)=3 for any path P with at least four vertices. In this paper we study the Thue chromatic number of trees. In view of the fact that π(T) is bounded by 4 in this class we aim to describe the 4-chromatic trees. In particular, we study the 4-critical trees which are minimal with respect to this property. Though there are many trees T with π(T)=4 we show that any of them has a sufficiently large subdivision H such that π(H)=3. The proof relies on Thue sequences with additional properties involving palindromic words. We also investigate nonrepetitive edge colorings of trees. By a similar argument we prove that any tree has a subdivision which can be edge-colored by at most Δ+1 colors without repetitions on paths.
Nonrepetitive vertex colorings of graphs We prove new upper bounds on the Thue chromatic number of an arbitrary graph and on the facial Thue chromatic number of a plane graph in terms of its maximum degree.
Oriented graph coloring An oriented k-coloring of an oriented graph G (that is, a digraph with no cycle of length 2) is a partition of its vertex set into k subsets such that (i) no two adjacent vertices belong to the same subset and (ii) all the arcs between any two subsets have the same direction. We survey the main results that have been obtained on oriented graph colorings.
Formal Derivation of Strongly Correct Concurrent Programs. A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
The lattice of data refinement We define a very general notion of data refinement which comprises the traditional notion of data refinement as a special case. Using the concepts of duals and adjoints we define converse commands and find a symmetry between ordinary data refinement and a dual (backward) data refinement. We show how ordinary and backward data refinement are interpreted as simulation and we derive rules for the piecewise data refinement of programs. Our results are valid for a general language, covering...
Logarithmical hopping encoding: a low computational complexity algorithm for image compression LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber-Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient.
Class-based n-gram models of natural language We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics.
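The class-based bigram model described above factors a word transition through classes, roughly P(w_i | w_{i-1}) ≈ P(c(w_i) | c(w_{i-1})) · P(w_i | c(w_i)). The toy sketch below estimates these factors by counting over a hand-picked class assignment; the paper's actual contribution, the clustering algorithm that finds the classes, is not shown.

from collections import Counter

# Toy corpus and a hand-picked word -> class map (illustrative only).
corpus = "the cat sat on the mat the dog sat on the rug".split()
cls = {"the": "DET", "cat": "N", "dog": "N", "mat": "N", "rug": "N", "sat": "V", "on": "P"}

word_count = Counter(corpus)
class_count = Counter(cls[w] for w in corpus)
class_bigram = Counter((cls[a], cls[b]) for a, b in zip(corpus, corpus[1:]))

def p_class_bigram(w, prev):
    """P(w | prev) ~= P(class(w) | class(prev)) * P(w | class(w))."""
    c_prev, c_w = cls[prev], cls[w]
    p_cc = class_bigram[(c_prev, c_w)] / class_count[c_prev]
    p_wc = word_count[w] / class_count[c_w]
    return p_cc * p_wc

print(p_class_bigram("dog", "the"))   # the word bigram is unseen, but the class bigram is not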
Reflection and semantics in LISP
Navigating hierarchically clustered networks through fisheye and full-zoom methods Many information structures are represented as two-dimensional networks (connected graphs) of links and nodes. Because these networks tend to be large and quite complex, people often prefer to view part or all of the network at varying levels of detail. Hierarchical clustering provides a framework for viewing the network at different levels of detail by superimposing a hierarchy on it. Nodes are grouped into clusters, and clusters are themselves placed into other clusters. Users can then navigate these clusters until an appropriate level of detail is reached. This article describes an experiment comparing two methods for viewing hierarchically clustered networks. Traditional full-zoom techniques provide details of only the current level of the hierarchy. In contrast, fisheye views, generated by the “variable-zoom” algorithm described in this article, provide information about higher levels as well. Subjects using both viewing methods were given problem-solving tasks requiring them to navigate a network, in this case, a simulated telephone system, and to reroute links in it. Results suggest that the greater context provided by fisheye views significantly improved user performance. Users were quicker to complete their task and made fewer unnecessary navigational steps through the hierarchy. This validation of fisheye views is important for designers of interfaces to complicated monitoring systems, such as control rooms for supervisory control and data acquisition systems, where efficient human performance is often critical. However, control room operators remained concerned about the size and visibility tradeoffs between the fine detail provided by full-zoom techniques and the global context supplied by fisheye views. Specific interface features are required to reconcile the differences.
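Fisheye views of the kind compared above are usually driven by a degree-of-interest score that trades a node's a priori importance against its distance from the current focus. The sketch below uses the classic DOI(x | focus) = API(x) - distance(x, focus) formulation with tree depth as importance; it is an illustrative stand-in, not the "variable-zoom" algorithm described in the article.

# Degree-of-interest for a node in a hierarchy: a priori importance (negative depth)
# minus the distance to the current focus node (classic fisheye idea).

def doi(node_depth, dist_to_focus):
    return -node_depth - dist_to_focus

def visible(node_depth, dist_to_focus, threshold=-4):
    """Show a node only if its degree of interest exceeds a threshold."""
    return doi(node_depth, dist_to_focus) >= threshold

print(visible(node_depth=1, dist_to_focus=2))   # True: shallow and near the focus
print(visible(node_depth=3, dist_to_focus=4))   # False: deep and far away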
A Task-Based Methodology for Specifying Expert Systems A task-based specification methodology for expert system specification that is independent of the problem solving architecture, that can be applied to many expert system applications, that focuses on what the knowledge is, not how it is implemented, that introduces the major concepts involved gradually, and that supports verification and validation is discussed. To evaluate the methodology, a specification of R1/SOAR, an expert system that reimplements a major portion of the R1 expert system, was reverse engineered.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.117
0.084667
0.07
0.009958
0.00519
0.0015
0
0
0
0
0
0
0
0
A knowledge-based software environment (KBSE) for designing concurrent processes
On Overview of KRL, a Knowledge Representation Language
Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks.
The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs.
Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed.
Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation.
Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
3-D transformations of images in scanline order
Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On Automatic and Interactive Design of Communication Systems This paper presents a transformational approach to the design of distributed systems where environment and concurrently running components communicate via synchronous message passing along directed channels. System specifications that combine trace-based with state-based reasoning are gradually modified by application of transformation rules until occam-like programs are achieved finally. We consider interactive and automatic aspects of such a design process and illustrate our approach by...
A Case Study in Transformational Design of Concurrent Systems. We explain a transformational approach to the design and verification of communicating concurrent systems. The transformations start from specifications that combine trace-based with state-based assertional reasoning about the desired communication behaviour, and yield concurrent implementations. We illustrate our approach by a case study proving correctness of implementations of safe and regular registers allowing concurrent writing and reading phases, originally due to Lamport.
Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication.
On Overview of KRL, a Knowledge Representation Language
Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls.
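The essence of the RPC package described above is a pair of stubs that marshal an ordinary procedure call into a message and back. The minimal sketch below fakes the transport with a local function call and JSON marshalling, so the binding machinery, network protocol and performance optimisations discussed in the paper are deliberately absent; all names here are illustrative.

import json

# Server side: a dispatch table maps procedure names to implementations.
PROCEDURES = {"add": lambda a, b: a + b}

def server_handle(packet):
    """Unmarshal a call packet, run the procedure, marshal the result."""
    call = json.loads(packet)
    result = PROCEDURES[call["proc"]](*call["args"])
    return json.dumps({"result": result})

# Client side: the stub hides marshalling behind an ordinary function call.
def remote_add(a, b, transport=server_handle):
    packet = json.dumps({"proc": "add", "args": [a, b]})
    reply = json.loads(transport(packet))      # transport would be a network send in real RPC
    return reply["result"]

print(remote_add(2, 3))   # looks like a local call, travels as a message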
Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
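Symbolic model checking evaluates temporal operators as fixpoints over implicitly represented state sets (BDDs in practice). The sketch below computes the states satisfying EF p as a least fixpoint, using explicit Python sets purely to expose the fixpoint structure rather than any real symbolic representation.

# Least fixpoint for EF p over an explicit transition relation.
# Real symbolic model checkers represent these sets with BDDs; plain sets are
# used here only to show the shape of the computation.

def pre_image(states, transitions):
    """States with at least one successor inside `states`."""
    return {s for (s, t) in transitions if t in states}

def ef(p_states, transitions):
    """EF p: states from which some path eventually reaches a p-state."""
    reach = set(p_states)
    while True:
        new = reach | pre_image(reach, transitions)
        if new == reach:
            return reach
        reach = new

transitions = {(0, 1), (1, 2), (2, 2), (3, 3)}
print(ef({2}, transitions))   # {0, 1, 2}: state 3 can never reach state 2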
2009 Data Compression Conference (DCC 2009), 16-18 March 2009, Snowbird, UT, USA
Voice as sound: using non-verbal voice input for interactive control We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction and then the system performs certain operation. Our goal is to achieve more direct, immediate interaction like using a button or joystick by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance traditional voice recognition approach.
Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (ConceptBase is available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects.
Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems.
Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.028571
0.0025
0
0
0
0
0
0
0
0
0
0
0
Modeling adaptive behaviors in Context UNITY Context-aware computing refers to a paradigm in which applications sense aspects of the environment and use this information to adjust their behavior in response to changing circumstances. In this paper, we present a formal model and notation (Context UNITY) for expressing quintessential aspects of context-aware computations; existential quantification, for instance, proves to be highly effective in capturing the notion of discovery in open systems. Furthermore, Context UNITY treats context in a manner that is relative to the specific needs of an individual application and promotes an approach to context maintenance that is transparent to the application. In this paper, we construct the model from first principles, introduce its proof logic, and demonstrate how the model can be used as an effective abstraction tool for context-aware applications and middleware.
Mining and analysing security goal models in health information systems Large-scale health information software systems have to adhere to complex, multi-lateral security and privacy regulations. Such regulations are typically defined in form of natural language (NL) documents. There is little methodological support for bridging the gap between NL regulations and the requirements engineering methods that have been developed by the software engineering community. This paper presents a method and tool support, which are aimed at narrowing this gap by mining and analysing structured security requirements in unstructured NL regulations. A key value proposition of our approach is that requirements are mined “in-place”, i.e., the structured model is tightly integrated with the NL text. This results in better traceability and enables an iterative rather than waterfall-like requirements extraction and analysis process. The tool and method have been evaluated in context of a real-world, large scale project, i.e., the Canadian Electronic Health Record.
Building problem domain ontology from security requirements in regulatory documents Establishing secure systems assurance based on Certification and Accreditation (C&A) activities, requires effective ways to understand the enforced security requirements, gather relevant evidences, perceive related risks in the operational environment, and reveal their causal relationships with other domain concepts. However, C&A security requirements are expressed in multiple regulatory documents with complex interdependencies at different levels of abstractions that often result in subjective interpretations and non-standard implementations. Their non-functional nature imposes complex constraints on the emergent behavior of software-intensive systems, making them hard to understand, predict, and control. To address these issues, we present novel techniques from software requirements engineering and knowledge engineering for systematically extracting, modeling, and analyzing security requirements and related concepts from multiple C&A-enforced regulatory documents. We employ advanced ontological engineering processes as our primary modeling technique to represent complex and diverse characteristics of C&A security requirements and related domain knowledge. We apply our methodology to build problem domain ontology from regulatory documents enforced by the Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP).
GRAIL/KAOS: an environment for goal-driven requirements engineering The KAOS methodology provides a language and method for goal-driven requirements elaboration. GRAIL is an environment under development to support the KAOS methodology. The GRAIL kernel combines a graphical view, a textual view, an abstract syntax view, and an object base view of specifications. GRAIL has been used to elicit and specify the requirements of several real, industrial projects. THE KAOS METHODOLOGY The KAOS methodology provides a specification language for capturing why, who, and when aspects in addition to the usual what requirements; a goal-driven elaboration method; and meta-level knowledge used for local guidance during method enactment [1, 2]. The language provides a rich ontology for capturing requirements in terms of goals, constraints, objects, actions, agents, etc. Links between requirements are represented as well to capture refinements, conflicts, operationalizations, responsibility assignments, etc. The KAOS language is a multi-paradigm specification language with a two-level structure: an outer semantic net layer for declaring concepts, their attributes and links to other concepts; an inner formal assertion layer for formally defining the concept. The latter combines a real-time temporal logic for the specification of goals, constraints, and objects, and standard pre-/postconditions for the specification of actions and their strengthening to ensure the constraints. The method roughly consists of (i) identifying and refining goals progressively until constraints that are assignable to individual agents are obtained, (ii) identifying objects and actions progressively from goals, (iii) deriving requirements on the objects and actions to meet the constraints, and (iv) assigning the constraints, objects and actions to the agents composing the system. Meta-level knowledge is used to guide the elaboration process; it takes the form of conceptual taxonomies, well-formedness rules and tactics to select among alternatives.
Formal Derivation of Strongly Correct Concurrent Programs. A method is described for deriving concurrent programs which are consistent with the problem specifications and free from deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well.
A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically, the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity
Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing.
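The snapshot algorithm sketched in the abstract above rests on a marker rule: when a process first receives a marker it records its own state, treats that channel as empty, relays markers on all outgoing channels, and logs messages on every other incoming channel until a marker arrives there too. The Python sketch below shows only that bookkeeping; process and channel names and the send callback are illustrative assumptions, and normal application processing is omitted.

class Process:
    """Sketch of one process's snapshot bookkeeping (helper names are illustrative)."""

    def __init__(self, pid, state, incoming, outgoing, send):
        self.pid, self.state = pid, state
        self.incoming, self.outgoing = incoming, outgoing
        self.send = send                  # send(channel, message) callback
        self.recorded_state = None
        self.open_channels = set()        # incoming channels still being recorded
        self.channel_log = {}             # channel -> messages caught in transit

    def on_marker(self, channel):
        if self.recorded_state is None:
            self.recorded_state = self.state          # record own state exactly once
            self.open_channels = set(self.incoming) - {channel}
            self.channel_log = {c: [] for c in self.incoming}
            for out in self.outgoing:                 # relay markers downstream
                self.send(out, "MARKER")
        else:
            self.open_channels.discard(channel)       # recording of this channel is done

    def on_message(self, channel, msg):
        if channel in self.open_channels:             # message was in transit at snapshot time
            self.channel_log[channel].append(msg)
        # normal application processing of msg would happen here

sent = []
p = Process(pid=1, state="s0", incoming=["c_in1", "c_in2"], outgoing=["c_out"],
            send=lambda ch, m: sent.append((ch, m)))
p.on_marker("c_in1")          # records state, relays a marker on c_out
p.on_message("c_in2", "m42")  # caught in transit on the still-open channel
p.on_marker("c_in2")          # closes the snapshot at this process
print(p.recorded_state, p.channel_log, sent)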
ACE: building interactive graphical applications
Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it.
From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware.
A Software Development Environment for Improving Productivity
Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.
Maintaining a legacy: towards support at the architectural level An organization that develops large, software-intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment (including its platform), and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care must be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date, explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel makes it very likely that many employees who contributed to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may then arise, facilitating the understanding of the system and providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control.
Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
1.2
0.1
0.066667
0.016667
0
0
0
0
0
0
0
0
0
0