Each record pairs a query with 13 candidate documents (Ranking 1 – Ranking 13) and 14 numeric score columns (score_0 – score_13). Column schema, with the reported minimum and maximum per column (string columns report character lengths):

| Column | Dtype | Reported range |
|---|---|---|
| Query Text | string | lengths 9 – 8.71k |
| Ranking 1 | string | lengths 14 – 5.31k |
| Ranking 2 | string | lengths 11 – 5.31k |
| Ranking 3 | string | lengths 11 – 8.42k |
| Ranking 4 | string | lengths 17 – 8.71k |
| Ranking 5 | string | lengths 14 – 4.95k |
| Ranking 6 | string | lengths 14 – 8.42k |
| Ranking 7 | string | lengths 17 – 8.42k |
| Ranking 8 | string | lengths 10 – 5.31k |
| Ranking 9 | string | lengths 9 – 8.42k |
| Ranking 10 | string | lengths 9 – 8.42k |
| Ranking 11 | string | lengths 10 – 4.11k |
| Ranking 12 | string | lengths 14 – 8.33k |
| Ranking 13 | string | lengths 17 – 3.82k |
| score_0 | float64 | 1 – 1.25 |
| score_1 | float64 | 0 – 0.25 |
| score_2 | float64 | 0 – 0.25 |
| score_3 | float64 | 0 – 0.24 |
| score_4 | float64 | 0 – 0.24 |
| score_5 | float64 | 0 – 0.24 |
| score_6 | float64 | 0 – 0.21 |
| score_7 | float64 | 0 – 0.1 |
| score_8 | float64 | 0 – 0.02 |
| score_9 | float64 | 0 – 0 |
| score_10 | float64 | 0 – 0 |
| score_11 | float64 | 0 – 0 |
| score_12 | float64 | 0 – 0 |
| score_13 | float64 | 0 – 0 |
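For orientation, the following is a minimal sketch of how a record with this schema could be loaded and inspected. It is not part of the dataset itself: it assumes the table has been exported to a Parquet file readable by pandas, and the file name `ranking_data.parquet` is only an illustrative placeholder.

```python
# Minimal sketch, assuming the table has been exported to a hypothetical
# Parquet file and that the column names match the schema above.
import pandas as pd

df = pd.read_parquet("ranking_data.parquet")  # hypothetical export path

query_col = "Query Text"
ranking_cols = [f"Ranking {i}" for i in range(1, 14)]  # Ranking 1 .. Ranking 13
score_cols = [f"score_{i}" for i in range(14)]         # score_0 .. score_13

row = df.iloc[0]
print("Query:", row[query_col][:100])
for col in ranking_cols:
    # Each ranking cell holds the full text of one candidate document
    # (typically a title followed by its abstract); preview the start only.
    print(f"{col}: {row[col][:80]}")
print("Scores:", [row[c] for c in score_cols])
```

The sample rows below follow the same column order, with cell values separated by `|`; each row spans several lines, and the final row is truncated in the source.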
A New Look at Physical Layer Security, Caching, and Wireless Energy Harvesting for Heterogeneous Ultra-dense Networks. Heterogeneous ultra-dense networks enable ultra-high data rates and ultra-low latency through the use of dense sub-6 GHz and millimeter-wave small cells with different antenna configurations. Existing work has widely studied spectral and energy efficiency in such networks and shown that high spectral and energy efficiency can be achieved. This article investigates the benefits of heterogeneous ult... | Cooperative wireless communications: a cross-layer approach This article outlines one way to address these problems by using the notion of cooperation between wireless nodes. In cooperative communications, multiple nodes in a wireless network work together to form a virtual antenna array. Using cooperation, it is possible to exploit the spatial diversity of the traditional MIMO techniques without each node necessarily having multiple antennas. Multihop networks use some form of cooperation by enabling intermediate nodes to forward the message from source to destination. However, cooperative communication techniques described in this article are fundamentally different in that the relaying nodes can forward the information fully or in part. Also the destination receives multiple versions of the message from the source, and one or more relays and combines these to obtain a more reliable estimate of the transmitted signal as well as higher data rates. The main advantages of cooperative communications are presented | On the Performance of Cognitive Underlay Multihop Networks with Imperfect Channel State Information. This paper proposes and analyzes cognitive multihop decode-and-forward networks in the presence of interference due to channel estimation errors. To reduce interference on the primary network, a simple yet effective back-off control power method is applied for secondary multihop networks. For a given threshold of interference probability at the primary network, we derive the maximum back-off control power coefficient, which provides the best performance for secondary multihop networks. Moreover, it is shown that the number of hops for secondary network is upper-bounded under the fixed settings of the primary network. For secondary multihop networks, new exact and asymptotic expressions for outage probability (OP), bit error rate (BER) and ergodic capacity over Rayleigh fading channels are derived. Based on the asymptotic OP and BEP, a pivotal conclusion is reached that the secondary multihop network offers the same diversity order as compared with the network without back off. Finally, we verify the performance analysis through various numerical examples which confirm the correctness of our analysis for many channel and system settings and provide new insight into the design and optimization of cognitive multihop networks. | Robust Secure Beamforming in MISO Full-Duplex Two-Way Secure Communications Considering worst-case channel uncertainties, we investigate the robust secure beamforming design problem in multiple-input-single-output full-duplex two-way secure communications. Our objective is to maximize worst-case sum secrecy rate under weak secrecy conditions and individual transmit power constraints. Since the objective function of the optimization problem includes both convex and concave terms, we propose to transform convex terms into linear terms. 
We decouple the problem into four optimization problems and employ alternating optimization algorithm to obtain the locally optimal solution. Simulation results demonstrate that our proposed robust secure beamforming scheme outperforms the non-robust one. It is also found that when the regions of channel uncertainties and the individual transmit power constraints are sufficiently large, because of self-interference, the proposed two-way robust secure communication is proactively degraded to one-way communication. | Secure Relaying in Multihop Communication Systems. This letter considers improving end-to-end secrecy capacity of a multihop decode-and-forward relaying system. First, a secrecy rate maximization problem without transmitting artificial noise (AN) is considered, following which the AN-aided secrecy schemes are proposed. Assuming that global channel state information (CSI) is available, an optimal power splitting solution is proposed. Furthermore, an iterative joint optimization of transmit power and power splitting coefficient has also been considered. For the scenario of no eavesdropper's CSI, we provide a suboptimal solution. The simulation results demonstrate that the AN-aided optimal scheme outperforms other schemes. | Artificial Noise-Aided Physical Layer Security in Underlay Cognitive Massive MIMO Systems with Pilot Contamination. In this paper, a secure communication model for cognitive multi-user massive multiple-input multiple-output (MIMO) systems with underlay spectrum sharing is investigated. A secondary (cognitive) multi-user massive MIMO system is operated by using underlay spectrum sharing within a primary (licensed) multi-user massive MIMO system. A passive multi-antenna eavesdropper is assumed to be eavesdropping upon either the primary or secondary confidential transmissions. To this end, a physical layer security strategy is provisioned for the primary and secondary transmissions via artificial noise (AN) generation at the primary base-station (PBS) and zero-forcing precoders. Specifically, the precoders are constructed by using the channel estimates with pilot contamination. In order to degrade the interception of confidential transmissions at the eavesdropper, the AN sequences are transmitted at the PBS by exploiting the excess degrees-of-freedom offered by its massive antenna array and by using random AN shaping matrices. The channel estimates at the PBS and secondary base-station (SBS) are obtained by using non-orthogonal pilot sequences transmitted by the primary user nodes (PUs) and secondary user nodes (SUs), respectively. Hence, these channel estimates are affected by intra-cell pilot contamination. In this context, the detrimental effects of intra-cell pilot contamination and channel estimation errors for physical layer secure communication are investigated. For this system set-up, the average and asymptotic achievable secrecy rate expressions are derived in closed-form. Specifically, these performance metrics are studied for imperfect channel state information (CSI) and for perfect CSI, and thereby, the secrecy rate degradation due to inaccurate channel knowledge and intra-cell pilot contamination is quantified. Our analysis reveals that a physical layer secure communication can be provisioned for both primary and secondary massive MIMO systems even with the channel estimation errors and pilot contamination. | A New Look at Dual-Hop Relaying: Performance Limits with Hardware Impairments. 
Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system. | The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism. | Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CPL( R ) is used to illustrate verification techniques. | Software process modeling: principles of entity process models | Animation of Object-Z Specifications with a Set-Oriented Prototyping Language | 3rd international workshop on software evolution through transformations: embracing change Transformation-based techniques such as refactoring, model transformation and model-driven development, architectural reconfiguration, etc. are at the heart of many software engineering activities, making it possible to cope with an ever changing environment. This workshop provides a forum for discussing these techniques, their formal foundations and applications. | One VM to rule them all Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. 
The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter. | New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method. | 1.24 | 0.24 | 0.24 | 0.24 | 0.24 | 0.24 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
SpeakerLDA: Discovering Topics in Transcribed Multi-Speaker Audio Contents Topic models such as Latent Dirichlet Allocation (LDA) have been extensively used for characterizing text collections according to the topics discussed in documents. Organizing documents according to topic can be applied to different information access tasks such as document clustering, content-based recommendation or summarization. Spoken documents such as podcasts typically involve more than one speaker (e.g., meetings, interviews, chat shows or news with reporters). This paper presents a work-in-progress based on a variation of LDA that includes in the model the different speakers participating in conversational audio transcripts. Intuitively, each speaker has her own background knowledge which generates different topic and word distributions. We believe that informing a topic model with speaker segmentation (e.g., using existing speaker diarization techniques) may enhance discovery of topics in multi-speaker audio content. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. 
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. 
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Agent-based resource discovery architecture for environmental emergency management An agent-based environmental emergency management framework is introduced as a loosely coupled collection of agents that can cooperate to prepare for and response to environmental emergency situations. In this framework, resources play a critical role because they are the foundation for taking action in environmental emergencies. Therefore, an agent-based resource discovery architecture is then proposed to search for the relevant resources over the Internet. In the making of an agent-based resource discovery architecture, two pivotal issues need to be addressed: resource description language (RDL) and its resource matchmaking mechanism. RDL provides a specification to publish and request for resources in environmental emergency situations, and matchmaking is the process of finding an appropriate resource for a request through a medium. In this paper, a possibilistic Petri net-based resource description language is proposed as an advanced RDL with four key features: possibilistic transitions to represent a resource or a request; input places to denote preconditions expected to hold before performing the resources; output places to denote postconditions expected to hold after performing the resources; possibility and necessity measures to quantify the confidence levels that an agent can provide the relevant resource for a request. A matchmaking mechanism, permitting a relaxed match for close semantics, is also developed to search for the possible resources among agents for a request. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. 
Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. 
In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. 
DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
A Fast Learning Control Strategy for Unmanned Aerial Manipulators We present an artificial intelligence-based control approach, the fusion of artificial neural networks and type-2 fuzzy logic controllers, namely type-2 fuzzy-neural networks, for the outer adaptive position controller of unmanned aerial manipulators. The performance comparison of proportional-derivative (PD) controller working alone and the proposed intelligent control structures working in parallel with a PD controller is presented. The simulation and real-time results show that the proposed online adaptation laws eliminate the need for precise tuning of conventional controllers by learning system dynamics and disturbances online. The proposed approach is also computationally inexpensive due to the implementation of the fast sliding mode control theory-based learning algorithm which does not require matrix inversions or partial derivatives. Both simulation and experimental results have shown that the proposed artificial intelligence-based learning controller is capable of reducing the root-mean-square error by around 50% over conventional PD and PID controllers. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). 
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. 
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Structural classification and relaxation matching of totally unconstrained handwritten zip-code numbers A system for recognizing totally unconstrained handwritten numerals is described. It comprises a feature extractor and two classification algorithms. The feature extractor decomposes the skeleton of a character into geometric primitives containing topological information of the character. These primitives consist of convex polygons and line segments, and features are generated from each primitive. The recognition process contains a fast structural classifier that identifies the majority of the samples, and a robust relaxation algorithm which classifies the rest of the data. The system was trained and tested on real-life handwritten ZIP codes. | Multiple classifier decision combination strategies for character recognition: A review. Two research strands, each identifying an area of markedly increasing importance in the current development of pattern analysis technology, underlie the review covered by this paper, and are drawn together to offer both a task-oriented and a fundamentally generic perspective on the discipline of pattern recognition. The first of these is the concept of decision fusion for high-performance pattern recognition, where (often very diverse) classification technologies, each providing complementary sources of information about class membership, can be integrated to provide more accurate, robust and reliable classification decisions. The second is the rapid expansion in technology for the automated analysis of (especially) handwritten data for OCR applications including document and form processing, pen-based computing, forensic analysis, biometrics and security, and many other areas, especially those which seek to provide online or offline processing of data which is available in a human-oriented medium. Classifier combination/multiple expert processing has a long history, but the sheer volume and diversity of possible strategies now available suggest that it is timely to consider a structured review of the field. Handwritten character processing provides an ideal context for such a review, both allowing engagement with a problem area which lends itself ideally to the performance enhancements offered by multi-classifier configurations, but also allowing a clearer focus to what otherwise, because of the unlimited application horizons, would be a task of unmanageable proportions. Hence, this paper explicitly reviews the field of multiple classifier decision combination strategies for character recognition, from some of its early roots to the present day. In order to give structure and a sense of direction to the review, a new taxonomy for categorising approaches is defined and explored, and this both imposes a discipline on the presentation of the material available and helps to clarify the mechanisms by which multi-classifier configurations deliver performance enhancements. The review incorporates a discussion both of processing structures themselves and a range of important related topics which are essential to maximise an understanding of the potential of such structures. Most importantly, the paper illustrates explicitly how the principles underlying the application of multi-classifier approaches to character recognition can easily generalise to a wide variety of different task domains. 
| A note on human recognition of hand-printed characters | Maris: map recognition input system A map recognition input system called MARIS is developed to digitize large-reduced-scale maps into a layered data form. This paper presents an experimental workstation, a vector-based recognition method, and an intelligent interaction function which are devised in order to enhance input speed. The recognition method is capable of extracting building lines, contour lines, and lines representing railways, roads and water areas. The recognition and the interaction utilize new efficient line tracing/tracking techniques. Experimental results show that the input time using MARIS can be reduced to about 25% of that of a system using a conventional interactive digitizer. | Optical Character Recognition - a Survey. | Numeral Recognition by Weighting Local Decisions This paper presents a new technique to improve thecombination of classification decisions obtained fromlocal analysis of patterns. Specifically, a geneticalgorithm is used to determine the optimal weight vectorto balance the local decisions in the combination process.The experimental results, carried out in the field ofhand-written numeral recognition, demonstrate theeffectiveness of the new technique. | Robust contour decomposition using a constant curvature criterion The problem of decomposing an extended boundary or contour into simple primitives is addressed with particular emphasis on Laplacian-of-Gaussian zero-crossing contours. A technique is introduced for partitioning such contours into constant curvature segments. A nonlinear 'blip' filter matched to the impairment signature of the curvature computation process, an overlapped voting scheme, and a sequential contiguous segment extraction mechanism are used. This technique is insensitive to reasonable changes in algorithm parameters and robust to noise and minor viewpoint-induced distortions in the contour shape, such as those encountered between stereo image pairs. The results vary smoothly with the data, and local perturbations induce only local changes in the result. Robustness and insensitivity are experimentally verified. | Robust detection of region boundaries in a sequence of images The problem of region recognition in a sequence of images is addressed, and a recognition system that finds and tracks region-of-interest boundaries in those images is presented. These regions are not stationary: parts of the boundary may be missing or completely blurred and outliers are likely to exist. Thus, the emphasis is on robustification and efficiency. The region segmentation problem was formulated as a multihypothesis test that seeks the boundary that maximizes a performance criterion which is general in terms of blur and noise. Efficiency is obtained by restricting outline candidates to an adaptive search area near the optimal boundary from the previous section. The search for the maximum is cast into a fast first-order dynamic programming procedure. Robust statistical techniques are used in the multihypothesis test to reduce the sensitivity to outliers and unexpected noise. The inconsistent parts of the optimal boundary are then detected by using a robust expectation maximization algorithm and are interpolated from higher-quality parts. 
The boundary obtained by this method is used as the reference boundary for the next image | Edge-directed prediction for lossless compression of natural images This paper sheds light on the least-square (LS)-based adaptive prediction schemes for lossless compression of natural images. Our analysis shows that the superiority of the LS-based adaptation is due to its edge-directed property, which enables the predictor to adapt reasonably well from smooth regions to edge areas. Recognizing that LS-based adaptation improves the prediction mainly around the edge areas, we propose a novel approach to reduce its computational complexity with negligible performance sacrifice. The lossless image coder built upon the new prediction scheme has achieved noticeably better performance than the state-of-the-art coder CALIC with moderately increased computational complexity | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | A taxonomy for real-world modelling concepts A major component in problem analysis is to model the real world itself. However, the modelling languages suggested so far, suffer from several weaknesses, especially with respect to dynamics . First, dynamic modelling languages originally aimed at describing data—rather than real-world—processes. Moreover, they are either weak in expression, so that models become too vague to be meaningful, or they are cluttered with rigorous detail, which makes modelling unnecessarily complicated and inhibits the communication with end users. This paper establishes a simple and intuitive conceptual basis for the modelling of the real world, with an emphasis on dynamics. Object-orientation is not considered appropriate for this purpose, due to its focus on static object structure. Dataflow diagrams, on the other hand, emphasize dynamics, but unfortunately, some major conceptual deficiencies make DFDs, as well as their various formal extensions, unsuited for real-world modelling. This paper presents a taxonomy of concepts for real-world modelling which rely on some seemingly small, but essential modifications of the DFD language, Hence the well-known, communication-oriented diagrammatic representations of DFDs can be retained. It is indicated how the approach can support a smooth transition into later stages of object-oriented design and implementation. | Refinement and Continuous Behaviour Refinement Calculus is a formal framework for the development of provably correct software. 
It is used by Action Systems, a predicate transformer based framework for constructing distributed and reactive systems. Recently, Action Systems were extended with a new action called the differential action. It allows the modelling of continuous behaviour, such that Action Systems may model hybrid systems. In this paper we investigate how the differential action fits into the refinement framework. As the main result we develop simple laws for proving a refinement step involving continuous behaviour within the Refinement Calculus. | A taxonomy for the early stages of the software development life cycle Most researchers in the software engineering community use the term “requirements” to describe the initial stage of software development, and they define requirements to be a process of describing what , not how . However, the range of tools and techniques that are currently sold as requirements tools and techniques extends from aids for analysts asking potential customers appropriate questions about an existent problem to aids for defining algorithms for software modules. This paper presents a taxonomy of the early stages of the software development life cycle to enable prospective tool and technique users to understand what they are buying and to enable future toolsmiths and technique developers to uniquely categorize and characterize their product in comparison with others. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.036696 | 0.042046 | 0.034013 | 0.034013 | 0.01712 | 0.000246 | 0.00016 | 0.000017 | 0 | 0 | 0 | 0 | 0 | 0 |
Hazard Analysis in Formal Specification Action systems have proven their worth in the design of safetycritical systems. The approach is based on a firm mathematical foundation within which the reasoning about the correctness and behaviour of the system under development is carried out. Hazard analysis is a vital part of the development of safety-critical systems. The results of the hazard analysis are semantically different from the specification terms of the controlling software. The purpose of this paper is to show how we can incorporate the results of hazard analysis into an action system specification by encoding this information via available composition operators for action systems in order to specify robust and safe controllers. | Design Templates for Collective Behavior While sequential behavior of single objects is fairly well understood, orchestrating the collective behavior emerging from the behaviors of individual objects continues to be a challenging task. This is especially true for distributed reactive systems. The joint action paradigm is a design methodology that concentrates on the collective behavior of objects. Aspects of collective behavior are gradually introduced in a controlled manner in a specification. This paper presents how such aspects can be archived as generic templates, and instantiated in such a way that formal properties verified for a template become properties of its application. Both design and verification effort are reused when a template is applied. | Incremental Specification with Joint Actions: The RPC-Memory Specification Problem Solutions to the RPC-Memory Specification Problem are developed incrementally, using an object-oriented modeling formalism with multi-object actions. Incrementality is achieved by superposition-based derivation steps that make effective use of multiple inheritance and specialization of inherited actions. Each stage models collective behaviors of objects at some level of abstraction, and the preservation of all safety properties is guaranteed in each step. The aim of the approach is to support a design methodology that combines operational intuition with formal reasoning in TLA and is suited for the use of animation tools. | Action-Based Concurrency and Synchronization for Objects We extend the Action-Oberon language for executing action systems with type-bound actions. Type-bound actions combine the concepts of type-bound procedures (methods) and actions, bringing object orientation to action systems. Type-bound actions are created at runtime along with the objects of their bound types. They permit the encapsulation of data and code in objects. Allowing an action to have more than one participant gives us a mechanism for expressing n-ary communication between objects. By showing how type-bound actions can logically be reduced to plain actions, we give our extension a firm foundation in the Refinement Calculus. | Operational specification with joint actions: serializable databases Joint actions are introduced as a language basis for operational specification of reactive systems. Joint action systems are closed systems with no communication primitives. Their nondeterministic execution model is based on multi-party actions without an explicit control flow, and they are amenable for stepwise derivation by superposition. The approach is demonstrated by deriving a specification for serializable databases in simple derivation steps. Two different implementation strategies are imposed on this as further derivations. 
One of the strategies is two-phase locking, for which a separate implementation is given and proved correct. The other is multiversion timestamp ordering, for which the derivation itself is an implementation. | Specifying the Caltech asynchronous microprocessor The action systems framework for modelling parallel programs is used to formally specify a microprocessor. First the microprocessor is specified as a sequential program. The sequential specification is then decomposed and refined into a concurrent program using correctness-preserving program transformations. Previously this microprocessor has been specified at Caltech, where an asynchronous circuit for the microprocessor was derived from the specification. We propose a specification strategy that is based on the idea of spatial decomposition of the program variable space. | Object-oriented specification of reactive systems A novel approach to the operational specification of concurrent systems that leads to an object-oriented specification language is presented. In contrast to object-oriented programming languages, objects are structured as hierarchical state-transition systems, methods of individual objects are replaced by roles in cooperative multiobject actions whereby explicit mechanisms for process communication are avoided, and a simple nondeterministic execution model that requires no explicit invocation of actions is introduced. The approach has a formal basis, and it emphasizes structured derivation of specifications. Top-down and bottom-up methodologies are reflected in two variants of inheritance. The former captures the methodology of designing distributed systems by superimposition; the latter is suited to the specification of reusable modules | List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications. | Constraint logic programming for reasoning about discrete event processes The purpose of this paper is to show that constraint logic programming is a useful computational logic for modeling, simulating, and verifying real-time discrete event processes. The designer's knowledge about discrete event processes can be represented by a constraint logic program in a fashion that stays close to the mathematical definition of the processes, and can be used to semiautomate verification of possibly infinite-state systems. The constraint language CPL( R ) is used to illustrate verification techniques. 
| A generalization of Dijkstra's calculus Dijkstra's calculus of guarded commands can be generalized and simplified by dropping the law of the excluded miracle. This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion. The treatment of recursion uses the fixpoint method from denotational semantics. The paper relies only on the algebraic properties of predicates; individual states are not mentioned (except for motivation). To achieve this, we apply the correspondence between programs and predicates that underlies predicative programming. The paper is written from the axiomatic semantic point of view, but its contents can be described from the denotational semantic point of view roughly as follows: The Plotkin-Apt correspondence between wp semantics and the Smyth powerdomain is extended to a correspondence between the full wp/wlp semantics and the Plotkin powerdomain extended with the empty set. | Towards the Proper Integration of Extra-Functional Requirements In spite of the many achievements in software engineering, proper treatment of extra-functional requirements (also known as non-functional requirements) within the software development process is still a challenge to our discipline. The application of functionality-biased software development methodologies can lead to major contradictions in the joint modelling of functional and extra-functional requirements. Based on a thorough discussion on the nature of extra-functional requirements as well as on open issues in coping with them, this paper emphasizes the role of extra-functional requirements in the software development process. Particularly, a framework supporting the explicit integration of extra-functional requirements into a conventional phase-driven process model is proposed and outlined. | Expressing the relationships between multiple views in requirements specification The authors generalize and formalize the definition of a ViewPoint to facilitate its manipulation for composite system development. A ViewPoint is defined to be a loosely-coupled, locally managed object encapsulating representation knowledge, development process knowledge and partial specification knowledge about a system and its domain. In attempting to integrate multiple requirements specification ViewPoints, overlaps must be identified and expressed, complementary participants made to interact and cooperate, and contradictions resolved. The notion of inter-ViewPoint communication is addressed as a vehicle for ViewPoint integration. The communication model presented straddles both the method construction stage during which inter-ViewPoint relationships are expressed, and the method application stage during which these relationships are enacted | A Tool For Task-Based Knowledge And Specification Acquisition Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle.
To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and the maintenance of expert systems through their life cycles. (C) 1994 John Wiley & Sons, Inc. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.2 | 0.066667 | 0.05 | 0.033333 | 0.025 | 0.008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
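Illustrative aside on the predicate-transformer background shared by several abstracts in the record above (guarded commands, weakest preconditions, refinement): the Python sketch below computes demonic weakest preconditions for commands modelled as relations over a small finite state space and uses them to test a refinement. The four-element state space and the two commands are assumptions chosen only for the demonstration; the cited papers work with proofs over arbitrary state spaces, not enumeration.

# Illustrative sketch (not the calculus of the cited papers): predicate
# transformers over a tiny finite state space.  A predicate is a set of states,
# a command is a relation mapping each state to its set of possible successors,
# and wp(C, Q) is the set of states from which C is guaranteed to end in Q.

from itertools import chain, combinations

STATES = range(4)                               # assumed toy state space {0,1,2,3}

def wp(command, post):
    """Demonic weakest precondition: every successor lies in `post`,
    and at least one successor exists (no successors = possible abortion)."""
    return {s for s in STATES
            if command[s] and command[s] <= set(post)}

def refines(abstract, concrete):
    """concrete refines abstract iff wp(abstract, Q) <= wp(concrete, Q) for all Q."""
    all_posts = chain.from_iterable(combinations(STATES, r) for r in range(len(STATES) + 1))
    return all(wp(abstract, set(q)) <= wp(concrete, set(q)) for q in all_posts)

# abstract command: "x := x+1 or x+2 (mod 4)"; concrete command: "x := x+1 (mod 4)"
abstract = {s: {(s + 1) % 4, (s + 2) % 4} for s in STATES}
concrete = {s: {(s + 1) % 4} for s in STATES}

print(wp(concrete, {0, 1}))          # {0, 3}: the states from which x+1 mod 4 lands in {0, 1}
print(refines(abstract, concrete))   # True: reducing nondeterminism is a refinement
print(refines(concrete, abstract))   # False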
Towards a Compositional Approach to the Design and Verification of Distributed Systems We are investigating a component-based approach for formal design of distributed systems. In this paper, we introduce the framework we use for specification, composition and communication and we apply it to an example that highlights the different aspects of a compositional design, including top-down and bottom-up phases, proofs of composition, refinement proofs, proofs of program texts, and component reuse. | Data Refinement of Mixed Specifications . Using predicate transformers as a basis, we give semantics and refinement rules for mixed specifications that allow UNITY
style specifications to be written as a combination of abstract program and temporal properties. From the point of view of the programmer, mixed specifications may be considered a generalization of the UNITY specification notation to allow safety properties to be specified by abstract programs in addition to temporal properties. Alternatively, mixed specifications may be viewed as a generalization of the UNITY programming notation to allow arbitrary safety and progress properties in a generalized ‘always section’. The UNITY substitution axiom is handled in a novel way by replacing it with a refinement rule. The predicate transformers foundation allows known techniques for algorithmic and data-refinement for weakest precondition based programming to be applied to both safety and progress properties. In this paper, we define the predicate transformer based specifications,
specialize the refinement techniques to them, demonstrate soundness, and illustrate the approach with a substantial example. | On the Relation Between Unity Properties and Sequences of States Stepwise refinement of programs has proven to be a suitable method for developing parallel and distributed programs. We examine and compare a number of different notions of program refinement for Unity. Two of these notions are based on execution sequences. Refinement corresponds to the reduction of the set of execution sequences, i.e. reducing the amount of nondeterminism. The other refinement notions are based on Unity properties as introduced by Chandy and Misra. The Unity approach is to refine specifications. Although it has proven a suitable formalism for deriving algorithms, it seems less suitable for handling implementation details. Following Sanders and Singh, we formalize program refinement in the Unity framework as the preservation of Unity properties. We show that Unity properties are not powerful enough to characterize execution sequences. As a consequence, the notion of property-preserving refinement differs from the notion of reducing the set of execution sequences. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Towards a Calculus of Data Refinement In this paper we lay a foundation for a calculus of data refinement. We introduce the concept of conditional data refinement which enables us to incorporate contextual information in a refinement step. We give a number of its properties and show in several examples how data refinement can be used in practice. | A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. 
The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification. | Hierarchical correctness proofs for distributed algorithms This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton. This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed. Furthermore, this model can be used to construct modular, hierarchical correctness proofs of distributed algorithms. This thesis defines the input-output automaton model, and presents an interesting example of how this model can be used to construct such proofs. | An example of stepwise refinement of distributed programs: quiescence detection We propose a methodology for the development of concurrent programs and apply it to an important class of problems: quiescence detection. The methodology is based on a novel view of programs. A key feature of the methodology is the separation of concerns between the core problem to be solved and details of the forms of concurrency employed in the target architecture and programming language. We begin development of concurrent programs by ignoring issues dealing with concurrency and introduce such concerns in manageable doses. The class of problems solved includes termination and deadlock detection. | Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies. | Unifying execution of imperative and declarative code We present a unified environment for running declarative specifications in the context of an imperative object-Oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in place, i.e., in terms of the existing data structures and objects on the heap. After a solution is found, the heap is updated to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way. We show that this approach is not only convenient, but, for certain problems can also outperform a standard imperative implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects. | Non-interference through determinism The standard approach to the specification of a secure system is to present a (usually state-based) abstract security model separately from the specification of the system's functional requirements, and establishing a correspondence between the two specifications. 
This complex treatment has resulted in development methods distinct from those usually advocated for general applications. We provide a novel and intellectually satisfying formulation of security properties in a process algebraic framework, and show that these are preserved under refinement. We relate the results to a more familiar state-based (Z) specification methodology. There are efficient algorithms for verifying our security properties using model checking. | SADT/SAINT: Large scale analysis simulation methodology SADT/SAINT is a highly structured, top-down simulation methodology for defining, analyzing, communicating, and documenting large-scale systems. Structured Analysis and Design Technique (SADT), developed by SofTech, provides a functional representation and a data model of the system that is used to define and communicate the system. System Analysis of Integrated Networks of Tasks (SAINT), currently used by the USAF, is a simulation technique for designing and analyzing man-machine systems but is applicable to a wide range of systems. By linking SADT with SAINT, large-scale systems can be defined in general terms, decomposed to the necessary level of detail, translated into SAINT nomenclature, and implemented into the SAINT program. This paper describes the linking of SADT and SAINT resulting in an enhanced total simulation capability that integrates the analyst, user, and management. | On Teaching Visual Formalisms A graduate course on visual formalisms for reactive systems emphasized using such languages for not only specification and requirements but also (and predominantly) actual execution. The course presented two programming approaches: an intra-object approach using statecharts and an interobject approach using live sequence charts. Using each approach, students built a small system of their choice and then combined the two systems. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2496 | 0.2496 | 0.017659 | 0.008634 | 0.003836 | 0.000569 | 0.000118 | 0.000003 | 0 | 0 | 0 | 0 | 0 | 0
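Illustrative aside on the data-refinement theme running through the record above: a data refinement is typically justified by showing that concrete operations commute with an abstraction function applied to abstract operations. The sketch below checks such a commuting condition exhaustively for a toy data type (an abstract set implemented as a duplicate-free sorted tuple). The data type, the operation, and the testing-based check are assumptions made for the example; the cited papers discharge such conditions by proof, not by testing.

# Illustrative sketch of checking a data-refinement (commuting-diagram) condition
# by exhaustive testing on small cases.  The "abstract set / concrete sorted tuple"
# data types and the insert operation are invented for the example.

from itertools import combinations

def abs_insert(s, x):                 # abstract operation on a frozenset
    return s | {x}

def conc_insert(lst, x):              # concrete operation on a duplicate-free sorted tuple
    return tuple(sorted(set(lst) | {x}))

def abstraction(lst):                 # abstraction function: forget the ordering
    return frozenset(lst)

UNIVERSE = range(4)
concrete_states = [tuple(sorted(c)) for r in range(len(UNIVERSE) + 1)
                   for c in combinations(UNIVERSE, r)]

ok = all(abstraction(conc_insert(c, x)) == abs_insert(abstraction(c), x)
         for c in concrete_states for x in UNIVERSE)
print("commuting diagram holds on all sampled states:", ok)   # expected: True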
Towards a Unified Development Methodology for Shared-Variable Parallel and Distributed Programs A formal framework for the design of distributed, message-passing programs from shared-variable parallel programs is presented. Based on a uniform semantic model for both paradigms and a trace-based refinement calculus, we show how a shared-variable parallel program can be refined into a distributed program. The calculus is used to introduce iteration, parallelism, and local channels, to replace access to shared variables by message-passing primitives, and to update the channels such that processes find the expected information on the expected channels at the right time. The methodology is illustrated with the development of a distributed implementation of an all-pair, shortest-paths algorithm. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking.
As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
| 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
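Illustrative aside on the all-pairs shortest-paths example mentioned in the query abstract of the record above: a common shared-variable starting point for such a derivation is the Floyd-Warshall algorithm, sketched below on a small made-up graph. The derivation into a message-passing program described in the abstract is not reproduced here; only the sequential starting point is shown.

# Floyd-Warshall all-pairs shortest paths over a small directed graph, given
# here only as the shared-variable starting point that a refinement like the
# one described in the query abstract would transform.  The graph is invented.

INF = float("inf")
# adjacency matrix: dist[i][j] = weight of edge i -> j (INF if absent)
dist = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]

n = len(dist)
for k in range(n):                      # allow k as an intermediate vertex
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

for row in dist:
    print(row)                          # e.g. the first row becomes [0, 3, 5, 6]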
Converging on the optimal attainment of requirements Planning for the optimal attainment of requirements is an important early lifecycle activity. However, such planning is difficult when dealing with competing requirements, limited resources, and the incompleteness of information available at requirements time. A novel approach to requirements optimization is described. A requirements interaction model is executed to randomly sample the space of options. This produces a large amount of data, which is then condensed by a summarization tool. The result is a small list of critical decisions (i.e., those most influential in leading towards the desired optimum). This focuses human experts' attention on a relatively few decisions and makes them aware of major alternatives. This approach is iterative. Each iteration allows experts to select from among the major alternatives. In successive iterations the execution and summarization modules are run again, but each time further constrained by the decisions made in previous iteration. In the case study shown here, out of 99 yes/no decisions (approximately 1030 possibilities), five iterations were sufficient to find and make the 30 key ones. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). 
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. 
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
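Illustrative aside on the sampling-and-summarization loop described in the query abstract of the record above: the sketch below stands in for that approach with a toy model, sampling random yes/no decision vectors, scoring them, and ranking decisions by how strongly their setting separates the best runs from the rest. The scoring function, sample sizes, and ranking heuristic are all assumptions; the cited work uses a real requirements interaction model and an iterative, human-in-the-loop procedure.

# Illustrative Monte-Carlo sketch of "sample the option space, then summarise
# which decisions matter most".  The scoring function below is a stand-in for
# a real requirements interaction model and is entirely invented.

import random

N_DECISIONS, N_RUNS = 10, 2000
WEIGHTS = [random.uniform(-1, 1) for _ in range(N_DECISIONS)]   # hidden toy model

def score(decisions):
    # stand-in model: weighted sum of the yes/no decisions plus a little noise
    return sum(w * d for w, d in zip(WEIGHTS, decisions)) + random.gauss(0, 0.1)

runs = []
for _ in range(N_RUNS):
    decisions = [random.randint(0, 1) for _ in range(N_DECISIONS)]
    runs.append((score(decisions), decisions))

runs.sort(reverse=True)
best = [d for _, d in runs[: N_RUNS // 10]]          # top 10% of runs
rest = [d for _, d in runs[N_RUNS // 10 :]]

def frequency(sample, i):
    return sum(d[i] for d in sample) / len(sample)

# decisions whose setting differs most between the best runs and the rest
influence = sorted(range(N_DECISIONS),
                   key=lambda i: abs(frequency(best, i) - frequency(rest, i)),
                   reverse=True)
print("most influential decisions, strongest first:", influence)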
Hierarchical object nets—a methodology for graphical modeling of discrete event systems | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. 
We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. 
In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
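Illustrative aside on the tabu-search abstract included in the record above: the sketch below is a bare-bones tabu search for a single-constraint 0/1 knapsack, with flip moves, a short tabu tenure, and an aspiration rule that overrides tabu status when a move beats the best solution found so far. The instance data, tenure, and infeasibility penalty are assumptions; the advanced strategies and target analysis described in the abstract are omitted.

# Bare-bones tabu search for a tiny 0/1 knapsack instance (invented data).
# Moves flip one variable; recently flipped variables are tabu unless the
# move improves on the best solution found so far (aspiration).

import random

values  = [10, 13, 7, 8, 12, 6]
weights = [ 5,  7, 3, 4,  6, 2]
CAPACITY, TENURE, ITERS = 15, 3, 200

def evaluate(x):
    value  = sum(v for v, xi in zip(values, x) if xi)
    weight = sum(w for w, xi in zip(weights, x) if xi)
    return value - 100 * max(0, weight - CAPACITY)      # penalise infeasibility

random.seed(0)
current = [0] * len(values)
best, best_val = current[:], evaluate(current)
tabu_until = [0] * len(values)           # iteration until which flipping variable i is tabu

for it in range(1, ITERS + 1):
    candidates = []
    for i in range(len(values)):
        neighbour = current[:]
        neighbour[i] ^= 1
        val = evaluate(neighbour)
        if tabu_until[i] <= it or val > best_val:        # aspiration overrides tabu
            candidates.append((val, i, neighbour))
    val, i, current = max(candidates)
    tabu_until[i] = it + TENURE
    if val > best_val:
        best, best_val = current[:], val

print("best value:", best_val, "selection:", best)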
Digital Watermarking in Telemedicine Applications - Towards Enhanced Data Security and Accessibility Implementing telemedical solutions has become a trend amongst the various research teams at an international level. Yet, contemporary information access and distribution technologies raise critical issues that urgently need to be addressed, especially those related to security. The paper suggests the use of watermarking in telemedical applications in order to enhance security of the transmitted sensitive medical data, familiarizes the users with a telemedical system and a watermarking module that have already been developed, and proposes an architecture that will enable the integration of the two systems, taking into account a variety of use cases and application scenarios. | A secure fragile watermarking scheme based on chaos-and-hamming code In this work, a secure fragile watermarking scheme is proposed. Images are protected and any modification to an image is detected using a novel hybrid scheme combining a two-pass logistic map with Hamming code. For security purposes, the two-pass logistic map scheme contains a private key to resist the vector quantization (VQ) attacks even though the embedding scheme is block independent. To ensure image integrity, watermarks are embedded into the to-be-protected images which are generated using Hamming code technique. Experimental results show that the proposed scheme has satisfactory protection ability and can detect and locate various malicious tampering via image insertion, erasing, burring, sharpening, contrast modification, and even though burst bits. Additionally, experiments prove that the proposed scheme successfully resists VQ attacks. | Image encryption using the two-dimensional logistic chaotic map Chaos maps and chaotic systems have been proved to be useful and effective for cryptography. In our study, the two-dimensional logistic map with complicated basin structures and attractors are first used for image encryption. The proposed method adopts the classic framework of the permutation-substitution network in cryptography and thus ensures both confusion and diffusion properties for a secure cipher. The proposed method is able to encrypt an intelligible image into a random-like one from the statistical point of view and the human visual system point of view. Extensive simulation results using test images from the USC-SIPI image database demonstrate the effectiveness and robustness of the proposed method. Security analysis results of using both the conventional and the most recent tests show that the encryption quality of the proposed method reaches or excels the current state-of-the-art methods. Similar encryption ideas can be applied to digital data in other formats (e.g., digital audio and video). We also publish the cipher MATLAB open-source-code under the web page https://sites.google.com/site/tuftsyuewu/source-code. (c) 2012 SPIE and IS&T. [DOI: 10.1117/1.JEI.21.1.013014] | CRT-based fragile self-recovery watermarking scheme for image authentication and recovery Fragile watermarking is one of the effective techniques for authentication of digital documents and images. However, recovering the content of the tampered region in a watermarked image is a challenging task while considering conflicting criteria of imperceptibility and watermark embedding capacity. 
In this paper we propose a Chinese remainder theorem (CRT)-based watermarking scheme which can recover the original contents in the tampered region of the digital content while maintaining imperceptibility criterion. High peak signal to noise ratio (PSNR) and large watermark capacity can be achieved by using the CRT-based embedding scheme. Since only modular operations are involved in computation of the CRT-based technique, it provides computational advantage as it involves only modular arithmetic. Besides, CRT-based technique introduces additional security to the watermarking scheme. By taking several digital images, we have shown that the proposed technique can recover the tampered contents effectively. We have also considered forgery detection on a digital cheque, eCheque, and shown that the proposed technique can detect and recover the original content from the forged cheque. | An adjustable-purpose image watermarking technique by particle swarm optimization. Imperceptibility, security, capacity, and robustness are among many aspects of image watermarking design. An ideal watermarking system should embed a large amount of information perfectly securely, but with no visible degradation to the host image. Many researchers have geared efforts towards developing specific techniques for variant applications. In this paper, we propose an adjustable-purpose, reversible and fragile watermarking scheme for image watermarking by particle swarm optimization (PSO). In general, given any host image and watermark, our scheme can provide an optimal watermarking solution. First, the content of a host image is analyzed to extract significant regions of interest (ROIs) automatically. The remaining regions of non-interest (RONIs) are collated for embedding watermarks by different amounts of bits determined by PSO to achieve optimal watermarking. The parameters can be adjusted relying upon user’s watermarking purposes. Experimental results show that the proposed technique has accomplished higher capacity and higher PSNR (peak signal-to-noise ratio) watermarking. | Iris based secure NROI multiple eye image watermarking for teleophthalmology. This paper presents a new secure multiple text and image watermarking scheme on cover eye image using fusion of discrete wavelet transforms (DWT) and singular value decomposition (SVD) for Teleophthalmology. Secure Hash Algorithm (SHA-512) is used for generating hash corresponding to iris part of the cover digital eye image and this unique hash parameter is used for enhancing the security feature of the proposed watermarking technique. Simultaneous embedding of four different watermarks (i.e. Signature, index, caption and reference watermark) in form of image and text using fusion of discrete wavelet transforms (DWT) and singular value decomposition (SVD) is achieved in this paper. The suggested technique initially divides the digital eye image into Region of interest (ROI) containing iris and Non-Region of interest (NROI) part where the text and image watermarks are embedded into the Non-Region of interest (NROI) part of the DWT cover image. The selection of DWT decomposition level for embedding the text and image watermarks depends on size, different characteristics and robustness requirements of medical watermark. The performance in terms of Normalized Correlation (NC) and bit error rate (BER) of the developed scheme is evaluated and analyzed against known signal processing attacks and `Checkmark' attacks. 
The method is found to be robust against all the considered attacks. The proposed multilevel watermarking method correctly extracts the embedded watermarks without error and is robust against the all considered attacks without significant degradation of the medical image quality of the watermarked image. Therefore the proposed method may find potential application in secure and compact medical data transmission for teleophthalmology applications. | Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: A meta-analysis The purpose of this meta-analysis is to examine overall effect as well as the impact of selected instructional design principles in the context of virtual reality technology-based instruction (i.e. games, simulation, virtual worlds) in K-12 or higher education settings. A total of 13 studies (N = 3081) in the category of games, 29 studies (N = 2553) in the category of games, and 27 studies (N = 2798) in the category of virtual worlds were meta-analyzed. The key inclusion criteria were that the study came from K-12 or higher education settings, used experimental or quasi-experimental research designs, and used a learning outcome measure to evaluate the effects of the virtual reality-based instruction. Results suggest games (FEM = 0.77; REM = 0.51), simulations (FEM = 0.38; REM = 0.41), and virtual worlds (FEM = 0.36; REM = 0.41) were effective in improving learning outcome gains. The homogeneity analysis of the effect sizes was statistically significant, indicating that the studies were different from each other. Therefore, we conducted moderator analysis using 13 variables used to code the studies. Key findings included that: games show higher learning gains than simulations and virtual worlds. For simulation studies, elaborate explanation type feedback is more suitable for declarative tasks whereas knowledge of correct response is more appropriate for procedural tasks. Students performance is enhanced when they conduct the game play individually than in a group. In addition, we found an inverse relationship between number of treatment sessions learning gains for games. With regards to the virtual world, we found that if students were repeatedly measured it deteriorates their learning outcome gains. We discuss results to highlight the importance of considering instructional design principles when designing virtual reality-based instruction. | A compile-time scheduling heuristic for interconnection-constrained heterogeneous processor architectures The authors present a compile-time scheduling heuristic called dynamic level scheduling, which accounts for interprocessor communication overhead when mapping precedence-constrained, communicating tasks onto heterogeneous processor architectures with limited or possibly irregular interconnection structures. This technique uses dynamically-changing priorities to match tasks with processors at each step, and schedules over both spatial and temporal dimensions to eliminate shared resource contention. This method is fast, flexible, widely targetable, and displays promising performance | Algebraic tools for the performance evaluation of discrete event systems In this paper, it is shown that a certain class of Petri nets called event graphs can be represented as linear "time-invariant" flnite-dimensional sys- tems using some particular algebras. 
This sets the ground on which a theory of these systems can be developed in a manner which is very analogous to that of conventional linear system theory. Part 2 of the paper is devoted to showing some preliminary basic developments in that direction. Indeed, there are several ways in which one can consider event graphs as linear systems: these ways correspond to approaches in the time domain, in the event domain and in a two-dimensional domain. In each of these approaches, a different algebra has to be used for models to remain linear. However, the common feature of these algebras is that they all fall into the axiomatic definition of "dioids". Therefore, Part 1 of the paper is devoted to a unified presentation of basic algebraic results on dioids. | Abstract interpretation of reactive systems The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations. | A marriage of rely/guarantee and separation logic In the quest for tractable methods for reasoning about concurrent algorithms both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely/guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches.
We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes. | Levelled Entity Relationship Model The Entity-Relationship formalism, introduced in the mid-seventies, is an extensively used tool for database design. The database community is now involved in building the next generation of database systems. However, there is no effective formalism similar to ER for modeling the complex data in these systems. We propose the Leveled Entity Relationship (LER) formalism as a step towards fulfilling such a need. An essential characteristic of these next-generation systems is that a data element is ... | Unifying Theories of Parallel Programming We are developing a shared-variable refinement calculus in the style of the sequential calculi of Back, Morgan, and Morris. As part of this work, we're studying different theories of shared-variable programming. Using the concepts and notations of Hoare & He's unifying theories of programming (UTP), we give a formal semantics to a programming language that contains sequential composition, conditional statements, while loops, nested parallel composition, and shared variables. We first give a UTP semantics to labelled action systems, and then use this to give the semantics of our programs. Labelled action systems have a unique normal form that allows a simple formalisation and validation of different logics for reasoning about shared-variable programs. In this paper, we demonstrate how this is done for Lamport's Concurrent Hoare Logic. | Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrences by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements to maintain image high quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image. | 1.111 | 0.101667 | 0.101667 | 0.101667 | 0.101667 | 0.049603 | 0.000833 | 0.000083 | 0 | 0 | 0 | 0 | 0 | 0
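The CRT-based tamper-recovery scheme in the row above relies only on modular arithmetic. As a minimal illustration of the residue arithmetic involved (not the paper's actual embedding scheme; the moduli and value range below are assumptions), a value can be split into residues under two coprime moduli and reconstructed exactly with the constructive Chinese remainder theorem:

```python
# Hypothetical sketch of CRT residue arithmetic; the moduli are illustrative only.
M1, M2 = 11, 16          # coprime moduli; M1 * M2 = 176 must exceed the value range

def to_residues(value):
    """Split a value (e.g. an intensity in 0..175) into its CRT residues."""
    return value % M1, value % M2

def from_residues(r1, r2):
    """Reconstruct the value from its residues via the constructive CRT formula."""
    inv = pow(M2, -1, M1)                        # modular inverse of M2 mod M1 (Python 3.8+)
    return (r2 + M2 * (((r1 - r2) * inv) % M1)) % (M1 * M2)

for v in (0, 42, 137, 175):
    assert from_residues(*to_residues(v)) == v   # recovery is exact
```

Because every step is a modular operation on small integers, the per-pixel cost stays low, which is consistent with the computational advantage claimed in the query abstract.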
Adaptive event-triggered control of a class of nonlinear networked systems. This paper investigates an adaptive event-triggered communication scheme (AETCS) for a class of networked Takagi–Sugeno (T–S) fuzzy control systems. The threshold of event-triggering condition has great influence on the maximum allowable number of successive packet losses. Different from the conventional method, the threshold, in this study, is dependent on a novel adaptive law which can be achieved on-line rather than a predefined constant, since the threshold with fixed value is hard to suit the variation of the system. The stability and stabilization criteria are derived by using a new Lyapunov function. Finally, an example is provided to demonstrate the design method. | Networked control system with asynchronous samplings and quantizations in both transmission and receiving channels. This study addresses a problem of the controlling networked control systems (NCSs) which is consisted of the continuous-time plant and controller. In both transmission and receiving channels, asynchronous sampling and different logarithmic quantization effects are considered. By categorizing three cases of asynchronous sampling and using two properties of quantizer which are sector bounded and convex combination, sufficient conditions of the existence of desired controllers for each asynchronous case are presented in the form of linear matrix inequalities (LMIs). Simulation results are given to illustrate the validity of the proposed methods. | Event-triggered leader-following consensus for multi-agent systems with semi-Markov switching topologies. This paper investigates the event-triggered leader-following consensus problem for a multi-agent system with semi-Markov switching topologies. A sampled-data-based event-triggered transmission scheme is introduced to reduce unnecessary communication. By modeling the switching of network topologies by a semi-Markov process and adopting an event-triggered transmission scheme, a new consensus protocol is proposed. Compared with the traditional Markovian switching topologies, the transition rates in the semi-Markov switching topologies are time-varying, which is more general and practicable. Through utilization of an appropriate Lyapunov–Krasovskii functional, some sufficient conditions are derived, which guarantee that the leader-following consensus can be achieved in mean-square sense. Moreover, the consensus gain matrices and parameter of the event-triggered scheme can be efficiently solved out. Finally, a numerical example illustrates the effectiveness of the proposed design technique. | Event-Triggered Fault Detection Filter Design for a Continuous-Time Networked Control System. This paper studies the problem of event-triggered fault detection filter (FDF) and controller coordinated design for a continuous-time networked control system (NCS) with biased sensor faults. By considering sensor-to-FDF network-induced delays and packet dropouts, which do not impose a constraint on the event-triggering mechanism, and proposing the simultaneous network bandwidth utilization ratio... | Survey on Recent Advances in Networked Control Systems. Networked control systems (NCSs) are systems whose control loops are closed through communication networks such that both control signals and feedback signals can be exchanged among system components (sensors, controllers, actuators, and so on). NCSs have a broad range of applications in areas such as industrial control and signal processing. 
This survey provides an overview on the theoretical dev... | Auxiliary function-based integral inequalities for quadratic functions and their applications to time-delay systems Finding integral inequalities for quadratic functions plays a key role in the field of stability analysis. In such circumstances, the Jensen inequality has become a powerful mathematical tool for stability analysis of time-delay systems. This paper suggests a new class of integral inequalities for quadratic functions via intermediate terms called auxiliary functions, which produce more tighter bounds than what the Jensen inequality produces. To show the strength of the new inequalities, their applications to stability analysis for time-delay systems are given with numerical examples. | Improved delay-range-dependent stability criteria for linear systems with time-varying delays This paper is concerned with the stability analysis of linear systems with time-varying delays in a given range. A new type of augmented Lyapunov functional is proposed which contains some triple-integral terms. In the proposed Lyapunov functional, the information on the lower bound of the delay is fully exploited. Some new stability criteria are derived in terms of linear matrix inequalities without introducing any free-weighting matrices. Numerical examples are given to illustrate the effectiveness of the proposed method. | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | Guarded commands, nondeterminacy and formal derivation of programs So-called “guarded commands” are introduced as a building block for alternative and repetitive constructs that allow nondeterministic program components for which at least the activity evoked, but possibly even the final state, is not necessarily uniquely determined by the initial state. For the formal derivation of programs expressed in terms of these constructs, a calculus will be be shown. | Towards a Compositional Approach to the Design and Verification of Distributed Systems We are investigating a component-based approach for formal design of distributed systems. In this paper, we introduce the framework we use for specification, composition and communication and we apply it to an example that highlights the different aspects of a compositional design, including top-down and bottom-up phases, proofs of composition, refinement proofs, proofs of program texts, and component reuse. | Language Constructs for Data Partitioning and Distribution This article presents a survey of language features for distributed memory multiprocessor systems (DMMs), in particular, systems that provide features for data partitioning and distribution. In these systems the programmer is freed from consideration of the low-level details of the target architecture in that there is no need to program explicit processes or specify interprocess communication. Programs are written according to the shared memory programming paradigm but the programmer is required to specify, by means of directives, additional syntax or interactive methods, how the data of the program are decomposed and distributed. 
| Some Finite-Graph Models for Process Algebra Without Abstract | An Object-Oriented Extension to PEARL90 This paper presents an object-oriented extension to the real-time programming language PEARL. The new language preserves PEARL's expressiveness for timeliness and industrial processes and, at same time, improves the language's readability and manageability (through the better encapsulation paradigm derived from the object concept). Moreover the resulting object model allows the definition of inter and intra object parallelism in a transparent and simple way. Besides that, some extensions are also proposed to enhance testability and safety-related aspects of the language, such as the enforcement of a deterministic temporal behaviour. | Use of symmetry in prediction-error field for lossless compression of 3D MRI images Abstract Three dimensional MRI images which are powerful tools for diagnosis of many diseases require large storage space. A number of lossless compression schemes exist for this purpose. In this paper we propose a new approach for lossless compression of these images which exploits the inherent symmetry that exists in 3D MRI images. First, an efficient pixel prediction scheme is used to remove correlation between pixel values in an MRI image. Then a block matching routine is employed to take advantage of the symmetry within the prediction error image. Inter-slice correlations are eliminated using another block matching. Results of the proposed approach are compared with the existing standard compression techniques. | 1.0525 | 0.06 | 0.025 | 0.016667 | 0.003846 | 0.00069 | 0.000023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
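The adaptive event-triggered row above hinges on a triggering test whose threshold is adapted on line rather than fixed. The following is a toy sketch, under an assumed state trajectory and an assumed adaptation law (neither is taken from the paper), of a relative event-trigger that releases a sample only when the error since the last transmission exceeds a state-dependent threshold:

```python
# Toy sketch of a relative event-trigger with an on-line adapted threshold.
# The sampled trajectory and the adaptation law are illustrative assumptions.
import numpy as np

def should_transmit(x_now, x_last_sent, sigma):
    """Release a sample when the error energy exceeds sigma times the state energy."""
    e = x_now - x_last_sent
    return float(e @ e) > sigma * float(x_now @ x_now)

def adapt_sigma(sigma, transmitted, step=0.01, lo=0.01, hi=0.5):
    """Toy adaptation: relax the threshold after a transmission, tighten it otherwise."""
    return min(max(sigma + (step if transmitted else -step), lo), hi)

x_sent, sigma, releases = np.zeros(2), 0.1, 0
for t in range(200):
    x = np.array([np.sin(0.05 * t), np.cos(0.05 * t)])   # stand-in sampled state
    fire = should_transmit(x, x_sent, sigma)
    if fire:
        x_sent, releases = x, releases + 1               # only these samples use the network
    sigma = adapt_sigma(sigma, fire)
print(releases, "of 200 samples transmitted")
```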
A proposed object-oriented development methodology The object-oriented approach to software engineering is maturing and evolving as industry methodologists strive to clarify and promote its underlying principles. The object-oriented paradigm has the potential to increase consistency within the software development process compared with previous software engineering approaches. Much of the current work, however, tends to focus on a particular phase without addressing the transition and traceability between phases. The methodology presented in this paper is proposed for the full development life-cycle. It synthesises and enhances several emerging object-oriented techniques and notations into a consistent approach. This methodology was developed to provide a framework for using object-oriented techniques in the development of a large simulation and prototyping laboratory. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . 
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set.
The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Improved stability conditions for uncertain neutral-type systems with time-varying delays AbstractThis paper investigates the robust stability problem for a class of uncertain neutral-type delayed systems. The systems under consideration contain parameter uncertainties and time-varying delays. We aim at designing less conservative robust stability criteria for such systems. A new second-order reciprocally convex inequality is first proposed in order to deal with double integral terms. Then, by constructing a new Lyapunov– Krasovskii functional and employing the improved Wirtinger-based integral inequality and the reciprocally convex combination approaches, novel stability criteria are obtained. Moreover, the stability conditions for standard time-delay system are obtained as by-product results. Comparisons in three numerical examples illustrate the effectiveness of our results. | Robust Delay-Dependent Stability Criteria for Time-Varying Delayed Lur'e Systems of Neutral Type This paper deals with the problem of the robust delay-dependent stability of uncertain Lur'e systems with neutral-type time-varying delays. By constructing a set of Lyapunov---Krasovskii functional, less conservative robust stability criteria are derived in terms of linear matrix inequalities. The contribution in reduced conservation of the proposed stability criteria relies on the reciprocally convex method and Wirtinger inequality, which provides tighter upper bound than Jensen inequality. Three numerical examples are provided to show the effectiveness of the proposed method. | Stability analysis of time-delay systems via free-matrix-based double integral inequality. Based on the free-weighting matrix and integral-inequality methods, a free-matrix-based double integral inequality is proposed in this paper, which includes the Wirtinger-based double integral inequality as a special case. By introducing some free matrices into the inequality, more freedom can be provided in bounding the quadratic double integral. The connection of the new integral inequality and Wirtinger-based double one is well described, which gives a sufficient condition for the application of the new inequality to be less conservative. Furthermore, to investigate the effectiveness of the proposed inequality, a new delay-dependent stability criterion is derived in terms of linear matrix inequalities. Numerical examples are given to demonstrate the advantages of the proposed method. | New results on stability analysis for systems with discrete distributed delay The integral inequality technique is widely used to derive delay-dependent conditions, and various integral inequalities have been developed to reduce the conservatism of the conditions derived. In this study, a new integral inequality was devised that is tighter than existing ones. It was used to investigate the stability of linear systems with a discrete distributed delay, and a new stability condition was established. The results can be applied to systems with a delay belonging to an interval, which may be unstable when the delay is small or nonexistent. Three numerical examples demonstrate the effectiveness and the smaller conservatism of the method. | On Overview of KRL, a Knowledge Representation Language | Integrating noninterfering versions of programs The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time consuming task to integrate programs by hand. 
To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated.This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables.The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops. | Object-oriented development in an industrial environment Object-oriented programming is a promising approach to the industrialization of the software development process. However, it has not yet been incorporated in a development method for large systems. The approaches taken are merely extensions of well-known techniques when 'programming in the small' and do not stand on the firm experience of existing developments methods for large systems. One such technique called block design has been used within the telecommunication industry and relies on a similar paradigm as object-oriented programming. The two techniques together with a third technique, conceptual modeling used for requirement modeling of information systems, have been unified into a method for the development of large systems. | Optimal, efficient, recursive edge detection filters The design of an optimal, efficient, infinite-impulse-response (IIR) edge detection filter is described. J. Canny (1986) approached the problem by formulating three criteria designed in any edge detection filter: good detection, good localization, and low spurious response. He maximized the product of the first two criteria while keeping the spurious response criterion constant. Using the variational approach, he derived a set of finite extent step edge detection filters corresponding to various values of the spurious response criterion, approximating the filters by the first derivative of a Gaussian. A more direct approach is described in this paper. 
The three criteria are formulated as appropriate for a filter of infinite impulse response, and the calculus of variations is used to optimize the composite criteria. Although the filter derived is also well approximated by first derivative of a Gaussian, a superior recursively implemented approximation is achieved directly. The approximating filter is separable into two linear filters operating in two orthogonal directions allowing for parallel edge detection processing. The implementation is very simple and computationally efficient. | Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed. | WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system. WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications. | The Conical Methodology and the evolution of simulation model development Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up.
The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches, throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described. | Visual Query Systems for Databases: A Survey Visual query systems (VQSs) are query systems for databases that use visual representations to depict the domain of interest and express related requests. VQSs can be seen as an evolution of query languages adopted into database management systems; they are designed to improve the effectiveness of the human–computer communication. Thus, their most important features are those that determine the nature of the human–computer dialogue. In order to survey and compare existing VQSs used for querying traditional databases, we first introduce a classification based on such features, namely the adopted visual representations and the interaction strategies. We then identify several user types and match the VQS classes against them, in order to understand which kind of system may be suitable for each kind of user. We also report usability experiments which support our claims. Finally, some of the most important open problems in the VQS area are described. | A Tool For Task-Based Knowledge And Specification Acquisition Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development fife cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and the maintenance of expert systems through their life cycles. (C) 1994 John Wiley & Sons, Inc. | Cognitive Relaying With Transceiver Hardware Impairments Under Interference Constraints. In this letter, we analyze the performance of cognitive amplify-and-forward multirelay networks with active direct link in the presence of relay transceiver hardware impairments. Considering distortion noises on both interference and main data links, we derive tight closed-form outage probability expressions and their asymptotic behavior for partial relay selection (PRS) and opportunistic relay se... | 1.2 | 0.1 | 0.033333 | 0.003333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Resource optimization and (min,+) spectral theory We show that certain resource optimization problems relative to Timed Event Graphs reduce to linear programs. The auxiliary variables which allow this reduction can be interpreted in terms of eigenvectors in the (min,+) algebra. Keywords---Resource Optimization, Timed Event Graphs, (max,+) algebra, spectral theory. I. INTRODUCTION Timed Event Graphs (TEGs) are a subclass of timed Petri nets which can be used to model deterministic discrete event dynamic systems subject to saturation and... | Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they wonder if such a controller may prevent the explosion of the number of tokens | Sizing of an industrial plant under tight time constraints using two complementary approaches: (max,+) algebra and computer simulation In this article (max,+) spectral theory results are applied in order to solve the problem of sizing in a real-time constrained plant. The process to control is a discrete event dynamic system without conflict. Therefore, it can be modeled by a timed event graph, a class of Petri net, whose behavior can be described with linear equations in the (max,+) algebra. First the sizing of the process without constraint is solved. Then we propose to design a simulation model of the plant to validate the sizing of the process. | The tropical double description method We develop a tropical analogue of the classical double description method allowing one to compute an internal representation (in terms of vertices) of a polyhedron defined externally (by inequalities). The heart of the tropical algorithm is a characterization of the extreme points of a polyhedron in terms of a system of constraints which define it. We show that checking the extremality of a point reduces to checking whether there is only one minimal strongly connected component in an hypergraph. The latter problem can be solved in almost linear time, which allows us to eliminate quickly redundant generators. We report extensive tests (including benchmarks from an application to static analysis) showing that the method outperforms experimentally the previous ones by orders of magnitude. The present tools also lead to worst case bounds which improve the ones provided by previous methods. | The equation A⊗x=B⊗y over (max, +). For the two-sided homogeneous linear equation system A⊗x=B⊗y over (max,+), with no infinite rows or columns in A or B, an algorithm is presented which converges to a finite solution from any finite starting point whenever a finite solution exists. If the finite elements of A, B are all integers, convergence is in a finite number of steps, for which a precise bound can be calculated if moreover one of A, B has only finite elements. The algorithm is thus pseudopolynomial in complexity.
| Rapid prototyping of control systems using high level Petri nets This paper presents a rapid prototyping methodology for the carrying out of control systems in which high level Petri nets provide the common framework to integrate the main phases of software development: specification, validation, performance evaluation, implementation.Petri nets are shown to be translatable into Ada program structures concerning processes and their synchronizations. | Visual feedback for validation of informal specifications In automatically synthesizing simulation models from informal specifications, the ambiguity of natural language (English) leads to multiple interpretations The authors report on a system, called the Model Generator, which provides visual feedback showing the interpretation of specification statements that have been automatically translated to a knowledge representation called conceptual graphs. The visual feedback is based on a combination of block diagrams and Petri net graphs | On Overview of KRL, a Knowledge Representation Language | Histograms of Oriented Gradients for Human Detection We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. | Specification and verification of concurrent systems in CESAR The aim of this paper is to illustrate by an example, the alternating bit protocol, the use of CESAR, an interactive system for aiding the design of distributed applications. | Unintrusive Ways to Integrate Formal Specifications in Practice Formal methods can be neatly woven in with less formal, but more widely-used, industrial-strength methods. We show how to integrate the Larch two-tiered specification method (GHW85a) with two used in the waterfall model of software development: Structured Analysis (Ros77) and Structure Charts (YC79). We use Larch traits to define data elements in a data dictionary and the functionality of basic activities in Structured Analysis data-flow diagrams; Larc h interfaces and traits to define the behavior of modules in Structure Charts. We also show how to integrate loosely formal specification in a prototyping model by discussing ways of refining Larch specifications as code evolves. To provide some realism to our ideas, we draw our examples from a non-trivial Larch specification of the graphical editor for the Miro visual languages (HMT +90). The companion technical report, CMU-CS-91-111, contains the entire specification. | Viewpoints: Requirements honesty This article discusses issues related to the inconsistency between requirements principles and the need for faster and faster ways of developing software. 
Requirements principles are related to the purpose of the system and to the appropriateness of requirements that correctly describe what is necessary for the system to fulfil its objectives. I argue that the quest for speed in software development may have the undesirable effect of weakening these principles. Since the beginnings of software engineering, there has been a search for faster ways to develop software. Many techniques and development models have been proposed that contribute to shortening development time, although the reduction in time comes almost as a side effect, as a result of improving some key aspect of software development. Agile methods are the first to place time-to-market as the prominent feature. The risk is to view other quality features as secondary. | An algorithm for blob hierarchy layout We present an algorithm for the aesthetic drawing of basic hierarchical blob structures, of the kind found in higraphs and statecharts and in other diagrams in which hierarchy is depicted as topological inclusion. Our work could also be useful in window system dynamics, and possibly also in things like newspaper layout, etc. Several criteria for aesthetics are formulated, and we discuss their motivation, our methods of implementation and the algorithm's performance. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.120678 | 0.1405 | 0.1405 | 0.1124 | 0.073842 | 0.000002 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
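The (min,+)/(max,+) row that ends above treats timed event graphs as linear recursions over a semiring. As a small, self-contained sketch of that algebra (the matrix of holding times and the horizon below are arbitrary assumptions, not data from the paper), the dater vector of firing times evolves as x(k) = A ⊗ x(k-1), where ⊗ is the max-plus matrix product:

```python
# Minimal sketch of (max,+) matrix-vector arithmetic for a timed event graph.
# The holding-time matrix and the horizon are illustrative assumptions.
import numpy as np

NEG_INF = -np.inf                      # the "zero" element of the (max,+) semiring

def maxplus_matvec(A, x):
    """(max,+) product: (A ⊗ x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[np.newaxis, :], axis=1)

A = np.array([[3.0, NEG_INF],          # arc holding times between transitions
              [2.0, 1.0]])
x = np.zeros(2)                        # dater vector: time of the k-th firing of each transition
for k in range(1, 6):
    x = maxplus_matvec(A, x)
    print(k, x)                        # asymptotic growth rate equals the max-plus eigenvalue
```

The throughput of such a graph is governed by the (max,+) eigenvalue (the maximum cycle mean), which is what connects the spectral theory in the query to resource sizing.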
An adaptive LS-Based motion prediction algorithm for video coding In this paper, we introduce an adaptive motion vector prediction algorithm to improve the performance of a video encoder. The block-based motion vector can be characterized by the local statistics so that the coefficients of LS-based linear motion predictor can be optimized. However, it requires very expensive computational cost, which is major bottleneck in real-time implementation. In order to resolve the problem, we propose the LS-based motion prediction algorithm using spatially varying motion-directed property, so that the coefficients of the motion predictor can be adaptively controlled, resulting in the reduction of computational cost as well as the prediction error. Experimental results show the capability of the proposed algorithm. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . 
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set.
The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
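The LS-based motion-prediction row that ends above fits the coefficients of a linear motion-vector predictor from local statistics. A rough sketch of that idea under assumed data (the neighbourhood size, training window, and synthetic motion vectors below are illustrative, not the paper's configuration) is to solve an ordinary least-squares problem over recently coded blocks and reuse the weights for the current block:

```python
# Rough sketch of least-squares motion-vector prediction from causal neighbours.
# The training window, neighbourhood, and synthetic data are assumptions.
import numpy as np

def fit_ls_predictor(neighbor_mvs, true_mvs):
    """neighbor_mvs: (N, k) causal-neighbour MV components from past blocks;
    true_mvs: (N,) coded MV components. Returns the k least-squares weights."""
    w, *_ = np.linalg.lstsq(neighbor_mvs, true_mvs, rcond=None)
    return w

def predict_mv(weights, current_neighbors):
    """Predicted MV component for the current block from its causal neighbours."""
    return current_neighbors @ weights

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                        # 3 causal neighbours, 20 past blocks
y = X @ np.array([0.5, 0.3, 0.2]) + 0.05 * rng.normal(size=20)
w = fit_ls_predictor(X, y)                          # local statistics -> predictor weights
print(predict_mv(w, X[-1]), "vs actual", y[-1])
```

Refitting the weights over a sliding causal window is the usual way such predictors adapt to spatially varying motion; the adaptive control of that refitting is what the query abstract targets to cut the computational cost.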
Synergy: A Conceptual Graph Activation-Based Language This paper presents the core of Synergy; an implemented visual multi-paradigm programming language based on executable Conceptual Graph (CG). Execution is based on a CG-activation mechanism for which concept lifecycle, relation propagation rules and referent instantiation constitute the key elements. In this paper we define the activation mechanism and the CG structure (concept, relation, context, co-reference) used in Synergy as well as the concept type definition, the encapsulation mechanism and the knowledge base of Synergy. Examples are given to illustrate some aspects of the language. Hybrid object-oriented and concurrent object-oriented use of Synergy are presented in other papers [9, 10]. | Conceptual Structures: Leveraging Semantic Technologies, 17th International Conference on Conceptual Structures, ICCS 2009, Moscow, Russia, July 26-31, 2009. Proceedings | A framework for analyzing and testing requirements with actors in conceptual graphs Software has become an integral part of many people's lives, whether knowingly or not. One key to producing quality software in time and within budget is to efficiently elicit consistent requirements. One way to do this is to use conceptual graphs. Requirements inconsistencies, if caught early enough, can prevent one part of a team from creating unnecessary design, code and tests that would be thrown out when the inconsistency was finally found. Testing requirements for consistency early and automatically is a key to a project being within budget. This paper will share an experience with a mature software project that involved translating software requirements specification into a conceptual graph and recommends several actors that could be created to automate a requirements consistency graph. | Constraints on Processes: Essential Elements for the Validation and Execution of Processes A process is often described as a sequence of actions that changes the state of a system. To make sure that it is semantically valid, it must abide by semantic constraints defining its proper behavior. These constraints are called behavioral constraints. In the past, [1] presented how to describe and structure constraints on conceptual graphs in a declarative yet operational way; and [2] presented a framework to describe and execute processes using conceptual graphs. This paper combines these two approaches to show how processes can be constrained. It also gives two examples showing why constrained processes are needed in real applications. The first example is a database application where migration constraints must be enforced; the second example shows how agent systems must use behavioral constraints in their interaction. By adding behavioral constraints to conceptual graphs based tools, this paper proposes the CG theory as a powerful modeling language not only for data but also for process modeling. | Modelling and Simulating Human Behaviours with Conceptual Graphs This paper describes an application of conceptual graphs in knowledge engineering. We are developing an assistance system for the acquisition and the validation of stereotyped behaviour models in human organizations. The system is built on a representation language requiring to be at expert level, to have a clear semantics and to be interpretable. The proposed language is an extension of conceptual graphs dedicated to the representation of behaviours.
Tools exploiting this language are provided to assist the construction of behaviour models and their simulation on concrete cases. | Conceptual Structures: Fulfilling Peirce's Dream, Fifth International Conference on Conceptual Structures, ICCS '97, Seattle, Washington, USA, August 3-8, 1997, Proceedings | Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective used computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language under-standing systems. The primary purpose of this research is not to advance our theoretical under-standing of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine. | STeP: Deductive-Algorithmic Verification of Reactive and Real-Time Systems . The Stanford Temporal Prover, STeP, combines deductivemethods with algorithmic techniques to verify linear-time temporal logicspecifications of reactive and real-time systems. STeP uses verificationrules, verification diagrams, automatically generated invariants, modelchecking, and a collection of decision procedures to verify finiteandinfinite-state systems.System Description: The Stanford Temporal Prover, STeP, supports thecomputer-aided formal verification of reactive, real-time... | New Approach to Requirements Trade-Off Analysis for Complex Systems In this paper, we propose a faceted requirement classification scheme for analyzing heterogeneous requirements. The representation of vague requirements is based on Zadeh's canonical form in test-score semantics and an extension of the notion of soft conditions. The trade-off among vague requirements is analyzed by identifying the relationship between requirements, which could be either conflicting, irrelevant, cooperative, counterbalance, or independent. Parameterized aggregation operators, fuzzy and/or, are selected to combine individual requirements. An extended hierarchical aggregation structure is proposed to establish a four-level requirements hierarchy to facilitate requirements and criticalities aggregation through the fuzzy and/or. A compromise overall requirement can be obtained through the aggregation of individual requirements based on the requirements hierarchy. The proposed approach provides a framework for formally analyzing and modeling conflicts between requirements, and for users to better understand relationships among their requirements. | The contract net protocol: high-level communication and control in a distributed problem solver The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is affected by a negotiation process, a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks. 
| Seven basic principles of software engineering This paper attempts to distill the large number of individual aphorisms on good software engineering into a small set of basic principles. Seven principles have been determined which form a reasonably independent and complete set. These are: 1.(1) manage using a phased life-cycle plan. 2.(2) perform continuous validation. 3.(3) maintain disciplined product control. 4.(4) use modern programming practices. 5.(5) maintain clear accountability for results. 6.(6) use better and fewer people. 7.(7) maintain a commitment to improve the process. The overall rationale behind this set of principles is discussed, followed by a more detailed discussion of each of the principles. | Operational specification languages The “operational approach” to software development is based on separation of problem-oriented and implementation-oriented concerns, and features executable specifications and transformational implementation. “Operational specification languages” are executable specification languages designed to fit the goals, assumptions, and strategies of the operational approach. This paper defines the operational approach and surveys the existing operational specification languages, viz., the graphic notation of the Jackson System Development method, PAISLey, Gist, and modern applicative languages. | DRIVE: a tool for developing, deploying, and managing distributed sensor and actuator applications This paper introduces Distributed Responsive Infrastructure-Virtualization Environment (DRIVE), a tool that provides both an integrated development environment (IDE) and an execution environment and thus supports the entire life cycle of sensor/actuator applications. Developers are only responsible for implementing the core event-handling logic, whereas DRIVE generates the necessary code for message passing and invocation, thus reducing the development skills required. The development methodology, which is component based and model driven, separates the solution model, which captures the business logic, from the deployment model, which reflects the physical computing infrastructure. This allows the administrators to configure and deploy applications on various infrastructure topologies. To illustrate the benefits of DRIVE, we present an example application, dock-door receiving, and show the ways in which DRIVE supports the modeling and development of the application logic and the multiphase deployment of the resulting application in a production environment. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.05481 | 0.0575 | 0.0575 | 0.05375 | 0.027657 | 0.001963 | 0.00001 | 0.000002 | 0 | 0 | 0 | 0 | 0 | 0 |
Distributed event algebras A general model, the Distributed Event Algebra or D-algebra, for distributed computation is developed, generalizing Lynch's Event State Algebras. Such models are essential to the construction of effective and convincing proofs of distributed algorithms. D-algebras are designed to allow hierarchical proof techniques. Two notions of mapping between D-algebras are defined. One operates on the level of uninterpreted actions, the other on the level of actions with interpretations as operators on states. A hierarchical proof of correctness using D-algebras consists of the construction of a series of D-algebras, from high level to low level, connected by a series of correctness preserving maps, and a proof of the correctness of the high level D-algebra. D-algebras also incorporate the notion of execution of a system as a partially ordered set of actions, thus reducing overspecification of executions. This results in the state history of a system under a particular execution being modeled as a directed graph, thus capturing all possible state sequences in a single structure. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). 
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. 
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Comparing techniques by means of encapsulation and connascence Today the object-oriented approach to software development is at the height of fashion. As such, it threatens to replace the structured approach which was the staple development approach of the 1970s and 1980s. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. 
We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. 
In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Reliable Fuzzy H∞ Control for Permanent Magnet Synchronous Motor Against Stochastic Actuator Faults This article examines the issue of reliable fuzzy H∞ control with permanent magnet synchronous motor (PMSM) and stochastic actuator faults. The principal target of modeling the reliable control for PMSM is to improve the performance of PMSM in terms of speed of response, tracking accuracy, and robustness. In contrast to work found in the literature, in the proposed dynamic model of a PMSM the load torque variation acts as a disturbance, the speed control strategy is developed based on the Takagi-Sugeno (T-S) fuzzy model, and stochastic actuator faults are considered, which is more practical and challenging. In such a manner, first, the nonlinear PMSM model is altered into corresponding linear submodels through the sufficient T-S fuzzy membership rules. Then, based on the obtained dynamic model, reliable H∞
control is designed for the considered PMSM. By executing an appropriate Lyapunov-Krasovskii (L-K) functional together with linear matrix inequality (LMI) optimization procedure, Wirtinger-based integral inequality approach, and arrangement of the delay-dependent adequate condition is determined which ensures that the closed-loop PMSM is robust asymptotic stable. Based on the acquired condition, the controller gains are derived by solving a set of LMIs. At last, the simulation results are depicted to validate the efficiency of our presented control method. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. 
As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. 
| 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Lossless compression of continuous-tone images via context selection, quantization, and modeling. Context modeling is an extensively studied paradigm for lossless compression of continuous-tone images. However, without careful algorithm design, high-order Markovian modeling of continuous-tone images is too expensive in both computational time and space to be practical. Furthermore, the exponential growth of the number of modeling states in the order of a Markov model can quickly lead to the problem of context dilution; that is, an image may not have enough samples for good estimates of conditional probabilities associated with the modeling states. New techniques for context modeling of DPCM errors are introduced that can exploit context-dependent DPCM error structures to the benefit of compression. New algorithmic techniques of forming and quantizing modeling contexts are also developed to alleviate the problem of context dilution and reduce both time and space complexities. By innovative formation, quantization, and use of modeling contexts, the proposed lossless image coder has a highly competitive compression performance and yet remains practical. | Reversible Implementations Of Irreversible Component Transforms And Their Comparisons In Image Compression Reversible color component transforms derived by the LU factorization are briefly described. It is possible to obtain an reversible implementation to a given component transform, even if the original transform is irreversible. Some examples are presented and their performances are compared in image compression. | Localized Lossless Authentication Watermark (LAW) A novel framework is proposed for lossless authentication watermarking of images which allows authentication and recovery of original images without any distortions. This overcomes a significant limitation of traditional authentication watermarks that irreversibly alter image data in the process of watermarking and authenticate the watermarked image rather than the original. In particular, authenticity is verified before full reconstruction of the original image, whose integrity is inferred from the reversibility of the watermarking procedure. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not required. A particular instantiation of the framework is implemented using a hierarchical authentication scheme and the lossless generalized-LSB data embedding mechanism. The resulting algorithm, called localized lossless authentication watermark (LAW), can localize tampered regions of the image; has a low embedding distortion, which can be removed entirely if necessary; and supports public/private key authentication and recovery options. The effectiveness of the framework and the instantiation is demonstrated through examples. | Orientation scanning to improve lossless compression of fingerprint images While standard compression methods available include complex source encoding schemes, the scanning of the image is often performed by a horizontal (row-by-row) or vertical scanning. In this work a new scanning method, called ridge scanning, for lossless compression of fingerprint images is presented. By using ridge scanning our goal is to increase the redundancy in data and thereby increase the compression rate. By using orientations, estimated from the linear symmetry property of local neighbourhoods in the fingerprint, a scanning algorithm which follows the ridges and valleys is developed. 
The properties of linear symmetry are also used for a segmentation of the fingerprint into two parts, one part which lacks orientation and one that has it. We demonstrate that ridge scanning increases the compression ratio for Lempel-Ziv coding as well as recursive Huffman coding with approximately 3% in average. Compared to JPEG-LS, using ridge scanning and recursive Huffman the gain is 10% in average. | Fast Constant Division Routines When there is no division circuit available, the arithmetical function of division is normally performed by a library subroutine. The library subroutine normally allows both the divisor and the dividend to be variables, and requires the execution of hundreds of assembly instructions. This correspondence provides a fast algorithm for performing the integer division of a variable by a predetermined divisor. Based upon this algorithm, an efficient division routine has been constructed for each odd divisor up to 55. These routines may be implemented in assembly languages, in microcodes, and in special-purpose circuits. | Watermarking digital image and video data. A state-of-the-art overview The authors begin by discussing the need for watermarking and the requirements. They go on to discuss digital watermarking techniques based on correlation and techniques that are not based on correlation | Near-lossless image compression by relaxation-labelled prediction This paper describes a differential pulse code modulation scheme suitable for lossless and near-lossless compression of monochrome still images. The proposed method is based on a classified linear-regression prediction followed by context-based arithmetic coding of the outcome residuals. Images are partitioned into blocks, typically 8 × 8, and a minimum mean square error linear predictor is calculated for each block. Given a preset number of classes, a clustering algorithm produces an initial guess of as many predictors to be fed to an iterative labelling procedure that classifies pixel blocks simultaneously refining the associated predictors. The final set of predictors is encoded, together with the labels identifying the class, and hence the predictor, to which each block belongs. A thorough performance comparison, both lossless and near-lossless, with advanced methods from the literature and both current and upcoming standards highlights the advantages of the proposed approach. The method provides impressive performances, especially on medical images. Coding time are affordable thanks to fast convergence of training and easy balance between compression and computation by varying the system's parameters. Decoding is always real-time thanks to the absence of training. | Reversible data hiding using additive prediction-error expansion Reversible data hiding is a technique that embeds secret data into cover media through an invertible process. In this paper, we propose a reversible data hiding scheme that can embed a large amount of secret data into image with imperceptible modifications. The prediction-error, difference between pixel value and its predicted value, is used to embed a bit '1' or '0' by expanding it additively or leaving it unchanged. Low distortion is guaranteed by limiting pixel change to 1 and averting possible pixel over/underflow; high pure capacity is achieved by adopting effective predictors to greatly exploit pixel correlation and avoiding large overhead like location map. 
Experimental results demonstrate that the proposed scheme provides competitive performances compared with other state-of-the-art schemes. | Near lossless compression of hyperspectral images based on distributed source coding Effective compression technique of on-board hyperspectral images has been an active topic in the field of hyperspectral remote sensing. In order to solve the effective compression of on-board hyperspectral images, a new distributed near lossless compression algorithm based on multilevel coset codes is proposed. Due to the diverse importance of each band, a new adaptive rate allocation algorithm is proposed, which allocates rational rate for each band according to the size of weight factor defined for hyperspectral images subject to the target rate constraints. Multiband prediction is introduced for Slepian-Wolf lossless coding and an optimal quantization algorithm is presented under the correct reconstruction of Slepian-Wolf decoder, which minimizes the distortion of reconstructed hyperspectral images under the target rate. Then Slepian-Wolf encoder exploits the correlation of the quantized values to generate the final bit streams. Experimental results show that the proposed algorithm has both higher compression efficiency and lower encoder complexity than several existing classical algorithms. | Credibility evaluation of income data with hierarchical correlation reconstruction. In situations like tax declarations or analyses of household budgets we would like to automatically evaluate credibility of exogenous variable (declared income) based on some available (endogenous) variables - we want to build a model and train it on provided data sample to predict (conditional) probability distribution of exogenous variable based on values of endogenous variables. Using Polish household budget survey data there will be discussed simple and systematic adaptation of hierarchical correlation reconstruction (HCR) technique for this purpose, which allows to combine interpretability of statistics with modelling of complex densities like in machine learning. For credibility evaluation we normalize marginal distribution of predicted variable to $\rho\approx 1$ uniform distribution on $[0,1]$ using empirical distribution function $(x=\mathrm{EDF}(y)\in[0,1])$, then model density of its conditional distribution $(\textrm{Pr}(x_0|x_1 x_2\ldots))$ as a linear combination of orthonormal polynomials using coefficients modelled as linear combinations of features of the remaining variables. These coefficients can be calculated independently, have similar interpretation as cumulants, additionally allowing to directly reconstruct probability distribution. Values corresponding to high predicted density can be considered as credible, while low density suggests disagreement with statistics of data sample, for example to mark for manual verification a chosen percentage of data points evaluated as the least credible. | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system.
The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | Acquiring Temporal Knowledge from Schedules This paper presents an economical algorithm for generating conceptual graphs from schedules and timing diagrams. The graphs generated are based on activity concepts associated with intervals of the schedules. The temporal conceptual relations selected here are drawn from both interval and endpoint temporal logics in order to minimize the complexity of the generated graphs to no more than k(n–1) temporal relations in a schedule of n intervals over k timelines (resources). Temporal reasoning and consistency checking in terms of the selected temporal relations are briefly reviewed. | Linda-based applicative and imperative process algebras The classical algebraic approach to the specification and verification of concurrent systems is tuned to distributed programs that rely on asynchronous communications and permit explicit data exchange. An applicative process algebra, obtained by embedding the Linda primitives for interprocess communication in a CCS/CSP-like language, and an imperative one, obtained from the applicative variant by adding a construct for explicit assignment of values to variables, are introduced. The testing framework is used to define behavioural equivalences for both languages and sound and complete proof systems for them are described together with a fully abstract denotational model (namely, a variant of Strong Acceptance Trees). | NLP-Based Classifiers to Generalize Expert Assessments in E-Reputation Online Reputation Management (ORM) is currently dominated by expert abilities. One of the great challenges is to effectively collect annotated training samples, especially to be able to generalize a small pool of expert feedback from area scale to a more global scale. One possible solution is to use advanced Machine Learning (ML) techniques, to select annotations from training samples, and propagate effectively and concisely. We focus on the critical issue of understanding the different levels of annotations. Using the framework proposed by the RepLab contest we present a considerable number of experiments in Reputation Monitoring and Author Profiling. The proposed methods rely on a large variety of Natural Language Processing (NLP) methods exploiting tweet contents and some background contextual information. We show that simple algorithms only considering tweet content are effective against state-of-the-art techniques. | 1.005919 | 0.010216 | 0.00866 | 0.005674 | 0.004315 | 0.002887 | 0.001802 | 0.000528 | 0.000029 | 0.000002 | 0 | 0 | 0 | 0
A Novel Aco-Based Static Task Scheduling Approach For Multiprocessor Environments Optimized task scheduling is one of the most important challenges in parallel and distributed systems. In such architectures during compile step, each program is decomposed into the smaller segments so-called tasks. Tasks of a program may be dependent; some tasks may need data generated by the others to start. To formulate the problem, precedence constraints, required execution times of tasks, and communication costs among them are modeled using a directed acyclic graph (DAG) named task-graph. The tasks must be assigned to a predefined number of processors in such a way that the program completion time is minimized, and the precedence constraints are preserved. It is well known to be NP-hard in general form and most restricted cases; therefore, a number of heuristic and meta-heuristic approaches have so far been proposed in the literature to find near-optimum solutions for this problem. We believe that ant colony optimization (ACO) is one of the best methods to cope with such kind of problems presented by graph. ACO is a metaheuristic approach inspired from social behavior of real ants. It is a multi-agent approach in which artificial ants (agents) try to find the shortest path to solve the given problem using an indirect local communication called stigmergy. Stigmergy lets ACO to be fast and efficient in comparison with other metaheuristics and evolutionary algorithms. In this paper, artificial ants, in a cooperative manner, try to solve static task scheduling problem in homogeneous multiprocessor environments. Set of different experiments on various task-graphs has been conducted, and the results reveal that the proposed approach outperforms the conventional methods from the performance point of view. | Improving the performance of Apache Hadoop on pervasive environments through context-aware scheduling. This article proposes to improve Apache Hadoop scheduling through a context-aware approach. Apache Hadoop is the most popular implementation of the MapReduce paradigm for distributed computing, but its design does not adapt automatically to computing nodes’ context and capabilities. By introducing context-awareness into Hadoop, we intent to dynamically adapt its scheduling to the execution environment. This is a necessary feature in the context of pervasive grids, which are heterogeneous, dynamic and shared environments. The solution has been incorporated into Hadoop and assessed through controlled experiments. The experiments demonstrate that context-awareness provides comparative performance gains, especially when some of the resources disappear during execution. | A lightweight decentralized service placement policy for performance optimization in fog computing A decentralized optimization policy for service placement in fog computing is presented. The optimization is addressed to place most popular services as closer to the users as possible. The experimental validation is done in the iFogSim simulator and by comparing our algorithm with the simulator’s built-in policy. The simulation is characterized by modeling a microservice-based application for different experiment sizes. Results showed that our decentralized algorithm places most popular services closer to users, improving network usage and service latency of the most requested applications, at the expense of a latency increment for the less requested services and a greater number of service migrations. 
| Static Homogeneous Multiprocessor Task Graph Scheduling Using Ant Colony Optimization. Nowadays, the utilization of multiprocessor environments has been increased due to the increase in time complexity of application programs and decrease in hardware costs. In such architectures during the compilation step, each program is decomposed into the smaller and maybe dependent segments so-called tasks. Precedence constraints, required execution times of the tasks, and communication costs among them are modeled using a directed acyclic graph (DAG) named task-graph. All the tasks in the task-graph must be assigned to a predefined number of processors in such a way that the precedence constraints are preserved, and the program's completion time is minimized, and this is an NP-hard problem from the time-complexity point of view. The results obtained by different approaches are dominated by two major factors; first, which order of tasks should be selected (sequence subproblem), and second, how the selected sequence should be assigned to the processors (assigning subproblem). In this paper, a hybrid proposed approach has been presented, in which two different artificial ant colonies cooperate to solve the multiprocessor task-scheduling problem; one colony to tackle the sequence subproblem, and another to cope with assigning subproblem. The utilization of background knowledge about the problem (different priority measurements of the tasks) has made the proposed approach very robust and efficient. 125 different task-graphs with various shape parameters such as size, communication-to-computation ratio and parallelism have been utilized for a comprehensive evaluation of the proposed approach, and the results show its superiority versus the other conventional methods from the performance point of view. | Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without... | Automatic speech recognition- an approach for designing inclusive games Computer games are now a part of our modern culture. However, certain categories of people are excluded from this form of entertainment and social interaction because they are unable to use the interface of the games. The reason for this can be deficits in motor control, vision or hearing. By using automatic speech recognition systems (ASR), voice driven commands can be used to control the game, which can thus open up the possibility for people with motor system difficulty to be included in game communities. This paper aims at find a standard way of using voice commands in games which uses a speech recognition system in the backend, and that can be universally applied for designing inclusive games. Present speech recognition systems however, do not support emotions, attitudes, tones etc. This is a drawback because such expressions can be vital for gaming. Taking multiple types of existing genres of games into account and analyzing their voice command requirements, a general ASRS module is proposed which can work as a common platform for designing inclusive games. A fuzzy logic controller proposed then is to enhance the system. 
The standard voice driven module can be based on algorithm or fuzzy controller which can be used to design software plug-ins or can be included in microchip. It then can be integrated with the game engines; creating the possibility of voice driven universal access for controlling games. | A novel method for solving the fully neutrosophic linear programming problems The most widely used technique for solving and optimizing a real-life problem is linear programming (LP), due to its simplicity and efficiency. However, in order to handle the impreciseness in the data, the neutrosophic set theory plays a vital role which makes a simulation of the decision-making process of humans by considering all aspects of decision (i.e., agree, not sure and disagree). By keeping the advantages of it, in the present work, we have introduced the neutrosophic LP models where their parameters are represented with a trapezoidal neutrosophic numbers and presented a technique for solving them. The presented approach has been illustrated with some numerical examples and shows their superiority with the state of the art by comparison. Finally, we conclude that proposed approach is simpler, efficient and capable of solving the LP models as compared to other methods. | Secure Medical Data Transmission Model for IoT-Based Healthcare Systems. Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security, and the integrity of the medical data became big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed through integrating either 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption schema is built using a combination of Advanced Encryption Standard, and Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters; the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values were relatively varied from 50.59 to 57.44 in case of color images and from 50.52 to 56.09 with the gray scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were ones for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the confidential patient's data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image. | Symbolic Model Checking Symbolic model checking is a powerful formal specification and verification method that has been applied successfully in several industrial designs. Using symbolic model checking techniques it is possible to verify industrial-size finite state systems. State spaces with up to 10^30 states can be exhaustively searched in minutes. Models with more than 10^120 states have been verified using special techniques.
| Strategies for information requirements determination Correct and complete information requirements are key ingredients in planning organizational information systems and in implementing information systems applications. Yet, there has been relatively little research on information requirements determination, and there are relatively few practical, well-formulated procedures for obtaining complete, correct information requirements. Methods for obtaining and documenting information requirements are proposed, but they tend to be presented as general solutions rather than alternative methods for implementing a chosen strategy of requirements determination. This paper identifies two major levels of requirements: the organizational information requirements reflected in a planned portfolio of applications and the detailed information requirements to be implemented in a specific application. The constraints on humans as information processors are described in order to explain why "asking" users for information requirements may not yield a complete, correct set. Various strategies for obtaining information requirements are explained. Examples are given of methods that fit each strategy. A contingency approach is then presented for selecting an information requirements determination strategy. The contingency approach is explained both for defining organizational information requirements and for defining specific, detailed requirements in the development of an application. | A superimposition control construct for distributed systems A control structure called a superimposition is proposed. The structure contains schematic abstractions of processes called roletypes in its declaration. Each roletype may be bound to processes from a basic distributed algorithm, and the operations of the roletype will then execute interleaved with those of the basic processes, over the same state space. This structure captures a kind of modularity natural for distributed programming, which previously has been treated using a macro-like implantation of code. The elements of a superimposition are identified, a syntax is suggested, correctness criteria are defined, and examples are presented. | Behavioral Subtyping, Specification Inheritance, and Modular Reasoning 2006 CR Categories: D. 2.2 [Software Engineering] Design Tools and Techniques, Object-oriented design methods; D. 2.3 [Software Engineering] Coding Tools and Techniques, Object-oriented programming; D. 2.4 [Software Engineering] Software/Program Verification, Class invariants, correctness proofs, formal methods, programming by contract, reliability, tools, Eiffel, JML; D. 2.7 [Software Engineering] Distribution, Maintenance, and Enhancement, Documentation; D. 3.1 [Programming Languages] Formal Definitions and Theory, Semantics; D. 3.2 [Programming Languages] Language Classifications, Object-oriented languages; D. 3.3 [Programming Languages] Language Constructs and Features, classes and objects, inheritance; F. 3.1 [Logics and Meanings of Programs] Specifying and Verifying and Reasoning about Programs, Assertions, invariants, logics of programs, pre-and post-conditions, specification techniques; | Reflection in direct style A reflective language enables us to access, inspect, and/or modify the language semantics from within the same language framework. 
Although the degree of semantics exposure differs from one language to another, the most powerful approach, referred to as the behavioral reflection, exposes the entire language semantics (or the language interpreter) that defines behavior of user programs for user inspection/modification. In this paper, we deal with the behavioral reflection in the context of a functional language Scheme. In particular, we show how to construct a reflective interpreter where user programs are interpreted by the tower of metacircular interpreters and have the ability to change any parts of the interpreters during execution. Its distinctive feature compared to the previous work is that the metalevel interpreters observed by users are written in direct style. Based on the past attempt of the present author, the current work solves the level-shifting anomaly by defunctionalizing and inspecting the top of the continuation frames. The resulting system enables us to freely go up and down the levels and access/modify the direct-style metalevel interpreter. This is in contrast to the previous system where metalevel interpreters were written in continuation-passing style (CPS) and only CPS functions could be exposed to users for modification. | Hyperspectral image compression based on lapped transform and Tucker decomposition In this paper, we present a hyperspectral image compression system based on the lapped transform and Tucker decomposition (LT-TD). In the proposed method, each band of a hyperspectral image is first decorrelated by a lapped transform. The transformed coefficients of different frequencies are rearranged into three-dimensional (3D) wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors. Then they are decomposed by Tucker decomposition into a core tensor and three factor matrices. The core tensor preserves most of the energy of the original tensor, and it is encoded using a bit-plane coding algorithm into bit-streams. Comparison experiments have been performed and provided, as well as an analysis regarding the contributing factors for the compression performance, such as the rank of the core tensor and quantization of the factor matrices. Highlights: We design a hyperspectral image compression using lapped transform and Tucker decomposition. Each band of a hyperspectral image is decorrelated by a lapped transform. Transformed coefficients of various frequencies are rearranged in 3D wavelet subband structures. 3D subbands are viewed as third-order tensors, decomposed by Tucker decomposition. The core tensor is encoded using a bit-plane coding algorithm into bit-streams. | 1.101667 | 0.103333 | 0.103333 | 0.101667 | 0.051667 | 0.001667 | 0.000667 | 0.000056 | 0 | 0 | 0 | 0 | 0 | 0
Evaluating document filtering systems over time. •We propose a new way of measuring document filtering system performance over time. •Performance is calculated per batch and a trend line is fitted to the results. •Systems are compared by their performance at the end of the evaluation period. •Important insights emerge by re-evaluating TREC KBA CCR runs of 2012 and 2013. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*.
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models.
The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
An object-based approach to protocol software implementation In this paper, an object-based approach to protocol software implementation is presented. A protocol is specified by an FSM, then the FSM is implemented by a group of related objects. In our method, each state is implemented by an object. The member functions of an object are the interface vents that trigger state transitions, and actions associated with state transitions constitute the body of the member functions. An object becomes another object if a state transition is enabled. A real example is given for illustration. We also present a software tool that lets a designer edit a state machine graphically, and generates C++ class definitions automatically. We also discuss some implementation related issues and present an organization model for protocol layers. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . 
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. 
The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
A Rigorous Approach to Combining Use Case Modelling and Accident Scenarios. We describe an approach to embedding a formal method within UML use case modelling. Moreover, we extend use case modelling to allow for the explicit representation of safety concerns. Our motivation comes from interaction with systems and safety engineers who routinely rely upon use case modelling during the early stages of defining and analysing system behaviours. Our chosen formal method is Event-B, which is refinement based and consequently has enabled us to exploit natural abstractions found within use case modelling. By underpinning informal use case modelling with Event-B, we are able to provide greater precision and formal assurance when reasoning about concerns identified by safety engineers as well as the subsequent changes made at the level of use case modelling. To achieve this we have extended use case modelling to include the notion of an accident case. Our approach is currently being implemented, and we have an initial prototype. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. 
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. 
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Convergence and Equivalence results for the Jensen's inequality - Application to time-delay and sampled-data systems The Jensen's inequality plays a crucial role in the analysis of time-delay and sampled-data systems. Its conservatism is studied through the use of the Gruss Inequality. It has been reported in the literature that fragmentation (or partitioning) schemes allow to empirically improve the results. We prove here that the Jensen's gap can be made arbitrarily small provided that the order of uniform fragmentation is chosen sufficiently large. Nonuniform fragmentation schemes are also shown to speed up the convergence in certain cases. Finally, a family of bounds is characterized and a comparison with other bounds of the literature is provided. It is shown that the other bounds are equivalent to Jensen's and that they exhibit interesting well-posedness and linearity properties which can be exploited to obtain better numerical results. | Bessel inequality for robust stability analysis of time-delay system This paper addresses the problem of the stability analysis for a linear time-delay systems via a robust analysis approach and especially the quadratic separation framework. To this end, we use the Bessel inequality for building operators that depend on the delay. They not only allow us to model the system as an uncertain feedback system but also to control the accuracy of the approximations made. Then, a set of LMIs conditions are proposed which tends on examples to the analytical bounds for both delay dependent stability and delay range stability. | A looped-functional approach for robust stability analysis of linear impulsive systems A new functional-based approach is developed for the stability analysis of linear impulsive systems. The new method, which introduces looped functionals, considers non-monotonic Lyapunov functions and leads to LMI conditions devoid of exponential terms. This allows one to easily formulate dwell-time results, for both certain and uncertain systems. It is also shown that this approach may be applied to a wider class of impulsive systems than existing methods. Some examples, notably on sampled-data systems, illustrate the efficiency of the approach. | A note on equivalence between two integral inequalities for time-delay systems. Jensen’s inequality and extended Jensen’s inequality are two important integral inequalities when problems of stability analysis and controller synthesis for time-delay systems are considered. The extended Jensen’s inequality introduces two additional free matrices and is generally regarded to be less conservative than Jensen’s inequality. The equivalence between Jensen’s inequality and extended Jensen’s inequality in bounding the quadratic term −h∫t−htẋT(s)Zẋ(s)ds in Lyapunov functional of time-delay systems is presented and theoretically proved. It is shown that the extended Jensen’s inequality does not decrease the lower bound of this quadratic term obtained using Jensen’s inequality, and then it does not reduce the conservativeness though two additional free matrices M1 and M2 are involved. | New stability criteria for linear systems with interval time-varying delay This paper investigates robust stability of uncertain linear systems with interval time-varying delay. The time-varying delay is assumed to belong to an interval and is a fast time-varying function. The uncertainty under consideration includes polytopic-type uncertainty and linear fractional norm-bounded uncertainty. 
A new Lyapunov–Krasovskii functional, which makes use of the information of both the lower and upper bounds of the interval time-varying delay, is proposed to drive some new delay-dependent stability criteria. In order to obtain much less conservative results, a tighter bounding for some term is estimated. Moreover, no redundant matrix variable is introduced. Finally, three numerical examples are given to show the effectiveness of the proposed stability criteria. | A novel stability analysis of linear systems under asynchronous samplings. This article proposes a novel approach to assess the stability of continuous linear systems with sampled-data inputs. The method, which is based on the discrete-time Lyapunov theorem, provides easy tractable stability conditions for the continuous-time model. Sufficient conditions for asymptotic and exponential stability are provided dealing with synchronous and asynchronous samplings and uncertain systems. An additional stability analysis is provided for the cases of multiple sampling periods and packet losses. Several examples show the efficiency of the method. | New stability and stabilization conditions for T-S fuzzy systems with time delay This paper is concerned with the problem of the stability analysis and stabilization for Takagi-Sugeno (T-S) fuzzy systems with time delay. A new Lyapunov-Krasovskii functional containing the fuzzy line-integral Lyapunov function and the simple functional is chosen. By using a recently developed Wirtinger-based integral inequality and introducing slack variables, less conservative conditions in terms of linear matrix inequalities (LMIs) are derived. Several examples are given to show the advantages of the proposed results. | Improved exponential stability criteria for time-varying delayed neural networks This paper is concerned with the exponential stability for neural networks with mixed time-varying delays. By using a more general delay-partitioning approach, an augmented Lyapunov functional that contains some information about neuron activation function is constructed. In order to derive less conservative results, an adjustable parameter is introduced to divide the range of the activation function into two unequal subintervals. Moreover, the application of combination of integral inequalities further reduces the conservativeness of the obtained exponential stability conditions. Numerical examples illustrate the advantages of the proposed conditions when compared with other results from the literatures. | Stability of Recurrent Neural Networks With Time-Varying Delay via Flexible Terminal Method. This brief is concerned with the stability criteria for recurrent neural networks with time-varying delay. First, based on convex combination technique, a delay interval with fixed terminals is changed into the one with flexible terminals, which is called flexible terminal method (FTM). Second, based on the FTM, a novel Lyapunov-Krasovskii functional is constructed, in which the integral interval ... | A logarithmic quantization index modulation for perceptually better data hiding In this paper, a novel arrangement for quantizer levels in the Quantization Index Modulation (QIM) method is proposed. Due to perceptual advantages of logarithmic quantization, and in order to solve the problems of a previous logarithmic quantizationbased method, we used the compression function of µ-Law standard for quantization. 
In this regard, the host signal is first transformed into the logarithmic domain using the µ-Law compression function. Then, the transformed data is quantized uniformly and the result is transformed back to the original domain using the inverse function. The scalar method is then extended to vector quantization. For this, the magnitude of each host vector is quantized on the surface of hyperspheres which follow logarithmic radii. Optimum parameter µ for both scalar and vector cases is calculated according to the host signal distribution. Moreover, inclusion of a secret key in the proposed method, similar to the dither modulation in QIM, is introduced. Performance of the proposed method in both cases is analyzed and the analytical derivations are verified through extensive simulations on artificial signals. The method is also simulated on real images and its performance is compared with previous scalar and vector quantization-based methods. Results show that this method features stronger a watermark in comparison with conventional QIM and, as a result, has better performance while it does not suffer from the drawbacks of a previously proposed logarithmic quantization algorithm. | Incorporating usability into requirements engineering tools The development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. This definition process requires a significant amount of communication between the requestor and the developer of the system. In recent years, several methodologies and tools have been proposed to improve this communication process. This paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool. | Adapting function point analysis to Jackson system development Overviews of the estimation model function point analysis (FPA) and the operational software development method Jackson system development (JSD) are given. The adaptation to JSD projects of two main versions of the FPA method is described. A number of issues are raised concerning both the applicability of FPA-based techniques to JSD projects and general ways in which FPA estimation might be improved. A summary is presented of the results obtained by applying the two adaptations to an actual commercial JSD project, and various objectives are highlighted for future research | SPMD execution of programs with dynamic data structures on distributed memory machines A combination of language features and compilation techniques that permits SPMD (single-program multiple-data) execution of programs with pointer-based dynamic data structures is presented. The Distributed Dynamic Pascal (DDP) language, which supports the construction and manipulation of local as well as distributed data structures, is described. The compiler techniques developed translate a sequential DDP program for SPMD execution in which all processors are provided with the same program but each processor executes only that part of the program which operates on the elements of the distributed data structures local to the processor. Therefore, the parallelism implicit in a sequential program is exploited. An approach for implementing pointers that is based on the generation of names for the nodes in a dynamic data structure is presented. 
The name-based strategy makes possible the dynamic distribution of data structures among the processors as well as the traversal of distributed data structures without interprocessor communication | More than requirements: Applying requirements engineering techniques to the challenge of setting corporate intellectual policy, an experience report Creation and adoption of corporate policies requires significant commitment of scarce senior management resources. In the absence of processes and tools, convergence upon final policy and may not be achieved in a timely manner. Significant similarities between policy and requirements documents suggest that requirements engineering techniques could be used to generate policy. However, neither evidence of feasibility of this approach nor theoretical investigation is present in the research literature. This paper reports upon our experience from an exploratory study where well-established requirements engineering methodologies were applied to generate corporate intellectual property policy. Interview, brainstorming and survey techniques were used to successfully apply structure and process to the task, generating a new corporate intellectual property policy that met or exceeded all stakeholder goals. The materials gathered during stakeholder interactions and analysis not only provided functional guidance for the policy itself, but also non-functional guidance with respect to the diversity of stakeholder opinions and the strength with which opinions were held. This knowledge greatly facilitated the creation of draft policy: this insider knowledge increased our expectation of stakeholder acceptance and also facilitated subsequent negotiation efforts. The feasibility of applying RE techniques to crafting corporate policy has been demonstrated and the results show sufficient promise that further investigation is warranted. | 1.026238 | 0.012639 | 0.010325 | 0.006488 | 0.005123 | 0.003706 | 0.000951 | 0.000109 | 0.000056 | 0.000002 | 0 | 0 | 0 | 0 |
Decentralization of process nets with centralized control The behavior of a net of interconnected, communicating processes is described in terms of the joint actions in which the processes can participate. A distinction is made between centralized and decentralized action systems. In the former, a central agent with complete information about the state of the system controls the execution of the actions; in the latter no such agent is needed. Properties of joint action systems are expressed in temporal logic. Centralized action systems allow for simple description of system behavior. Decentralized (two-process) action systems again can be mechanically compiled into a collection of CSP processes. A method for transforming centralized action systems into decentralized ones is described. The correctness of this method is proved, and its use is illustrated by deriving a process net that distributedly sorts successive lists of integers. | Scheduling in Real-Time Models Interleaving semantics is shown to provide an appropriate basis also for the modeling of real-time properties. Real-time scheduling of interleaved actions is explored, and the crucial properties of such schedulings are analyzed. The motivation of the work is twofold: to make real-time modeling practical already at early stages of specification and design, and to increase the reliability and predictability of reactive real-time systems by improved insensitivity to changes in the underlying... | Specware: Formal Support for Composing Software | Proof Rules Dealing with Fairness We provide proof rules allowing to deal with two fairness assumptions in the context of Dijkstra's do-od programs. These proof rules are obtained by considering a translated version of the original program which uses random assignment x:=? and admits only fair runs. The proof rules use infinite ordinals and deal with the original programs and not their translated versions. | Abstractions of Distributed Cooperation, their Refinement and Implementation Recognizing the role of abstractions is essential in software development. Communication mechanisms, however, often dictate how inter-process communication is addressed already at the level of specification. In this paper we show how abstract process cooperation can be refitted into an implementable form, taking into account constraints imposed by practical communication mechanisms. Early phases of the development can then rely on high-level abstractions, allowing simpler formulation and early validation of specifications. In later phases it can be formally verified that the given abstractions remain valid, which increases confidence in the resulting design. | A framework for modeling transfer protocols The notion of specification frameworks transposes the framework approach from software development to the level of formal modeling and analysis. A specification framework is devoted to a special application domain. It supplies reusable specification modules and guides the construction of specifications. Moreover, it provides theorems to be used as building blocks of verifications. By means of a suitable framework, specification and verification tasks can be reduced to the selection, parametrization and combination of framework elements resulting in a substantial support which opens formal analysis even for real-sized problems. The transfer protocol framework addressed here is devoted to the design of data transfer protocols.
Specifications of used and provided communication services as well as protocol specifications can be composed from its specification modules. The theorems correspond to the relations between protocol mechanism combinations and those properties of the provided service which are implemented by them. This article centers on the application of this framework which is discussed with the help of the specification of a sliding window protocol. Moreover the structure of its verification is described. The specification and verification technique applied is based on L. Lamport's temporal logic of actions (TLA). We use the variant cTLA which particularly supports the modeling of process systems. " 2000 Elsevier Science B.V. All rights reserved. | Specifying the Caltech asynchronous microprocessor The action systems framework for modelling parallel programs is used to formally specify a microprocessor. First the microprocessor is specified as a sequential program. The sequential specification is then decomposed and refined into a concurrent program using correctness-preserving program transformations. Previously this microprocessor has been specified at Caltech, where an asynchronous circuit for the microprocessor was derived from the specification. We propose a specification strategy that is based on the idea of spatial decomposition of the program variable space. | Refining Scj Mission Specifications Into Parallel Handler Designs Safety-Critical Java (SCJ) is a recent technology that restricts the execution and memory model of Java in such a way that applications can be statically analysed and certified for their real-time properties and safe use of memory. Our interest is in the development of comprehensive and sound techniques for the formal specification, refinement, design, and implementation of SCJ programs, using a correct-by-construction approach. As part of this work, we present here an account of laws and patterns that are of general use for the refinement of SCJ mission specifications into designs of parallel handlers used in the SCJ programming paradigm. Our notation is a combination of languages from the Circus family, supporting state-rich reactive models with the addition of class objects and real-time properties. Our work is a first step to elicit laws of programming for SCJ and fits into a refinement strategy that we have developed previously to derive SCJ programs. | An example of stepwise refinement of distributed programs: quiescence detection We propose a methodology for the development of concurrent programs and apply it to an important class of problems: quiescence detection. The methodology is based on a novel view of programs. A key feature of the methodology is the separation of concerns between the core problem to be solved and details of the forms of concurrency employed in the target architecture and programming language. We begin development of concurrent programs by ignoring issues dealing with concurrency and introduce such concerns in manageable doses. The class of problems solved includes termination and deadlock detection. | Mathematics of Program Construction, MPC'95, Kloster Irsee, Germany, July 17-21, 1995, Proceedings | Nondeterminacy and recursion via stacks and games The weakest-precondition interpretation of recursive procedures is developed for a language with a combination of unbounded demonic choice and unbounded angelic choice. This compositional formal semantics is proved to be equal to a game-theoretic operational semantics. 
Two intermediate stages are exploited. One step consists of unfolding the declaration of the recursive procedures. Fixpoint induction is used to prove the validity of this step. The compositional semantics of the unfolded declaration is proved to be equal to a formal semantics of a stack implementation of the recursive procedures. After an introduction to boolean two-person games, this stack semantics is shown to correspond to a game-theoretic operational semantics. | Scale & Affine Invariant Interest Point Detectors In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix.Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point.We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale/affine invariant descriptors computed on the regions associated with our points. | An overview of CIM enterprise modeling methodologies Computer integrated manufacturing (CIM) systems are increasingly being used as weapons by manufacturing enterprises in competitive business environments. The complicated nature of these systems and the high initial investment requirements have necessitated their accurate modeling. A number of models, modeling methodologies, and modeling tools have been developed and used for this purpose. We first present a brief overview of several CIM models as well as modeling tools and methods. Many of the models are said to emphasize only a part of the system. A concern in the research community is that these models must be integrated. We conclude the paper by examining the rationale and feasibility of integrating the different models and/or creating integrated models. | Generalized Jensen Inequalities with Application to Stability Analysis of Systems with Distributed Delays over Infinite Time-Horizons. The Jensen inequality has been recognized as a powerful tool to deal with the stability of time-delay systems. Recently, a new inequality that encompasses the Jensen inequality was proposed for the stability analysis of systems with finite delays. 
In this paper, we first present a generalized integral inequality and its double integral extension. It is shown how these inequalities can be applied to improve the stability result for linear continuous-time systems with gamma-distributed delays. Then, for the discrete-time counterpart we provide an extended Jensen summation inequality with infinite sequences, which leads to less conservative stability conditions for linear discrete-time systems with Poisson-distributed delays. The improvements obtained by the introduced generalized inequalities are demonstrated through examples. | 1.001153 | 0.002587 | 0.002146 | 0.001997 | 0.001811 | 0.001549 | 0.001323 | 0.000976 | 0.000666 | 0.000181 | 0.000002 | 0 | 0 | 0 |
Lossless generalized-LSB data embedding We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity. | Reducing location map in prediction-based difference expansion for reversible image data embedding In this paper, we present a reversible data embedding scheme based on an adaptive edge-directed prediction for images. It is known that the difference expansion is an efficient data embedding method. Since the expansion on a large difference will cause a significant embedding distortion, a location map is usually employed to select small differences for expansion and to avoid overflow/underflow problems caused by expansion. However, location map bits lower payload capacity for data embedding. To reduce the location map, our proposed scheme aims to predict small prediction errors for expansion by using an edge detector. Moreover, to generate a small prediction error for each pixel, an adaptive edge-directed prediction is employed which adapts reasonably well between smooth regions and edge areas. Experimental results show that our proposed data embedding scheme for natural images can achieve a high embedding capacity while keeping the embedding distortion low. | Reversible watermark with large capacity using the predictive coding A reversible watermarking algorithm with large capacity has been developed by applying the difference expansion of a generalized integer transform. In this algorithm, a watermark signal is inserted in the LSB of the difference values among pixels. In this paper, we apply the prediction errors calculated by a predictor in JPEG-LS for embedding a watermark signal, which contributes to increase the amount of embedded information with less degradation. As one of the drawbacks discovered in the above conventional method is a large size of the embedded location map introduced to make it reversible, we decrease the large size of the location map by vectorization, and then modify the composition of the map using the local characteristic in order to enhance the performance of JBIG2. | Low distortion transform for reversible watermarking. This paper proposes a low-distortion transform for prediction-error expansion reversible watermarking. The transform is derived by taking a simple linear predictor and by embedding the expanded prediction error not only into the current pixel but also into its prediction context. The embedding ensures the minimization of the square error introduced by the watermarking. The proposed transform introduces less distortion than the classical prediction-error expansion for complex predictors such as the median edge detector or the gradient-adjusted predictor. Reversible watermarking algorithms based on the proposed transform are analyzed. Experimental results are provided. 
| Reversible data hiding scheme based on neighboring pixel differences In this paper, we propose a reversible data hiding algorithm for grayscale images. Specifically, our algorithm is based on the histogram modification technique. The premise of this algorithm is that a histogram is constructed from the differences between each pixel and its neighbors. In the data embedding process, a modified histogram shifting scheme is used to embed a secret message into the pixels whose pixel difference is located at the peak value within the histogram. Experimental results show that our algorithm can achieve higher embedding capacity and imperceptible distortion. Performance comparisons with other existing algorithms are also provided to demonstrate the feasibility of our proposed algorithm in reversible data hiding. | Improved rhombus interpolation for reversible watermarking by difference expansion The paper proposes an interpolation error expansion reversible watermarking algorithm. The main novelty of the paper is a modified rhombus interpolation scheme. The four horizontal and vertical neighbors are considered and, depending on their values, the interpolated pixel is computed as the average of the horizontal pixels, of the vertical pixels or of the entire set of four pixels. Experimental results are provided. The proposed scheme outperforms the results obtained by using the average on the four horizontal and vertical neighbors and the ones obtained by using well known predictors as MED or GAP. | LOCO-I: a low complexity, context-based, lossless image compression algorithm LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus “enjoying the best of both worlds.” The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications | Universal coding, information, prediction, and estimation A connection between universal codes and the problems of prediction and statistical estimation is established. A known lower bound for the mean length of universal codes is sharpened and generalized, and optimum universal codes constructed. The bound is defined to give the information in strings relative to the considered class of processes. The earlier derived minimum description length criterion for estimation of parameters, including their number, is given a fundamental information, theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Gaussian autoregressive moving average (ARMA) processes below a bound, which is determined by the information in the data. 
| Splitting-Integrating Method for Normalizing Images by Inverse Transformations The splitting-integrating method is a technique developed for the normalization of images by inverse transformation. It does not require solving nonlinear algebraic equations and is much simpler than any existing algorithm for the inverse nonlinear transformation. Moreover, its solutions have a high order of convergence, and the images obtained through T/sup -1/ are free from superfluous holes and blanks, which often occur in transforming digitized images by other approaches. Application of the splitting-integrating method can be extended to supersampling in computer graphics, such as picture transformations by antialiasing, inverse nonlinear mapping, etc. | Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies. | Knowledge Representation and Reasoning in the Design of Composite Systems The design process that spans the gap between the requirements acquisition process and the implementation process, in which the basic architecture of a system is defined, and functions are allocated to software, hardware, and human agents. is studied. The authors call this process composite system design. The goal is an interactive model of composite system design incorporating deficiency-driven design, formal analysis, incremental design and rationalization, and design reuse. They discuss knowledge representations and reasoning techniques that support these goals for the product (composite system) that they are designing, and for the design process. To evaluate the model, the authors report on its use to reconstruct the design of two existing composite systems rationally. | Software benchmarking In software, “benchmarking” usually compares two companies' practices and results, but, occasionally, it involves sets of companies. For example, there are benchmark comparisons of industry software, such as insurance software, military software, telecommunication software, commercial software, and the like. In other domains, “benchmark” usually means the collection of a substantial body of quantitative data. Benchmark comparisons of various computers, for example, rate their relative performance in at least half a dozen categories. Historically, software benchmarks have been qualitative rather than quantitative. Even the Software Engineering Institute's Capability Maturity Model (SEI CMM) is essentially a qualitative benchmark that ranks company performance on a five-point scale that lacks quantification for specific quality and productivity levels | Addressing degraded service outcomes and exceptional modes of operation in behavioural models A dependable software system should attempt to at least partially satisfy user goals if full service provision is impossible due to an exceptional situation. In addition, a dependable system should evaluate the effects of the exceptional situation on future service provision and adjust the set of services it promises to deliver accordingly. 
In this paper we show how to express degraded service outcomes and exceptional modes of operation in behavioural models, i.e. use cases, activity diagrams and state charts. We also outline how to integrate the task of discovering and defining degraded outcomes and exceptional modes of operation into a requirements engineering process by presenting the relevant parts of our dependability-focused requirements engineering process DREP. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1.014561 | 0.015098 | 0.014318 | 0.009327 | 0.006498 | 0.00288 | 0.000187 | 0.000032 | 0.000001 | 0 | 0 | 0 | 0 | 0 |
Reversible Data Hiding with Pixel Prediction and Additive Homomorphism for Encrypted Image. Data hiding in encrypted image is a recent popular topic of data security. In this paper, we propose a reversible data hiding algorithm with pixel prediction and additive homomorphism for encrypted image. Specifically, the proposed algorithm applies pixel prediction to the input image for generating a cover image for data embedding, referred to as the preprocessed image. The preprocessed image is then encrypted by additive homomorphism. Secret data is finally embedded into the encrypted image via modular 256 addition. During secret data extraction and image recovery, addition homomorphism and pixel prediction are jointly used. Experimental results demonstrate that the proposed algorithm can accurately recover original image and reach high embedding capacity and good visual quality. Comparisons show that the proposed algorithm outperforms some recent algorithms in embedding capacity and visual quality. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm.
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them.
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Requirements Engineering: The Emerging Wisdom Developments in requirements engineering, as in system development, have come in waves. The next wave of requirements techniques and tools should account for the problem and development context, accommodate incompleteness, and recognize and exploit the non-absolute nature of user needs. | Identifying Quality-Requirement Conflicts Despite well-specified functional and interface requirements, many software projects have failed because they had a poor set of quality-attribute requirements. To find the right balance of quality-attribute requirements, you must identify the conflicts among desired quality attributes and work out a balance of attribute satisfaction. We have developed The Quality Attribute Risk and Conflict Consultant, a knowledge-based tool that can be used early in the system life cycle to identify potential conflicts. QARCC operates in the context of the WinWin system, a groupware support system that determines software and system requirements as negotiated win conditions. This article summarizes our experiences developing the QARCC-1 prototype using an early version of WinWin, and our integration of the resulting improvements into QARCC-2. | Requirements engineering in 2001: (virtually) managing a changing reality Trends in society and technology force requirements engineering to expand its role from a one-shot activity in the development process to a virtual image that accompanies the changing reality of a system. A maturing software market also requires a better understanding of the differentiation in market segments for requirements engineering and standardisation of methodologies within these segments. On the research side, this requires a coherent perspective of hitherto parallel research directions towards a comprehensive understanding of requirements processes, as well as the optimal exploitation of new technologies that support the main role of requirements engineering; mutual learning of all stakeholders concerned | Status report: requirements engineering It is argued that, in general, requirements engineering produces one large document, written in a natural language, that few people bother to read. Projects that do read and follow the document often build systems that do not satisfy needs. The reasons for the current state of the practice are listed. Research areas that have significant payoff potential, including improving natural-language specifications, rapid prototyping and requirements animation, requirements clustering, requirements-based testing, computer-aided requirements engineering, requirements reuse, research into methods, knowledge engineering, formal methods, and a unified framework, are outlined.<> | A field study of the software design process for large systems The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes. | The feature and service interaction problem in telecommunications systems: a survey Today's telecommunications systems are enhanced by a large and steadily growing number of supplementary services, each of which consists of a set of service features. 
A situation where a combination of these services behaves differently than expected from the single services' behaviors, is called service interaction. This interaction problem is considered as a major obstacle to the introduction of new services into telecommunications networks. In this contribution, we give a survey of the work carried out in this field during the last decade. After a brief review of classification criteria that exist for feature interactions so far, we use a perspective we call the emergence level view. This perspective pays respect to the fact that the sources for interactions can be of many different kinds, like, e.g., requirement conflicts or resource contentions. It is used to rationalize the impossibility of coping with the problem with one single approach. Afterwards, we present a framework of four different criteria in order to classify the approaches dealing with the problem: The general kind of approach taken, a refinement of the well-known detection, resolution, and prevention categories, serves as the main classification criterion. It is complemented by the method used, the stage during the feature lifecycle where an approach applies, and the system (network) context. The major results of the different approaches are then presented briefly using this classification framework. We finally draw some conclusions on the applicability of this framework and on possible directions of further research in this field. | A comparative analysis of methodologies for database schema integration One of the fundamental principles of the database approach is that a database allows a nonredundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries. Methodologies for database design usually perform the design activity by separately producing several schemas, representing parts of the application, which are subsequently merged. Database schema integration is the activity of integrating the schemas of existing or proposed databases into a global, unified schema. The aim of the paper is to provide first a unifying framework for the problem of schema integration, then a comparative review of the work done thus far in this area. Such a framework, with the associated analysis of the existing approaches, provides a basis for identifying strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions | Semantic Interoperability - Context, Issues and Research Directions An increasing dependence and cooperation between organisations has created a need for many enterprises to access remote as well as local information sources. Thus, it becomes important to be able to interconnect existing, heterogeneous information systems. One form of heterogeneity is semantic heterogeneity, which occurs when there is a disagreement regarding the interpretation and intended use of related information, or when the same phenomenon in a Universe of Discourse is modelled in different ways in two systems. In this paper, we survey the basic problems caused by semantic heterogeneity and suggest a number of research directions that address these problems. 
| Requirements Validation Through Viewpoint Resolution A specific technique-viewpoint resolution-is proposed as a means of providing early validation of the requirements for a complex system, and some initial empirical evidence of the effectiveness of a semi-automated implementation of the technique is provided. The technique is based on the fact that software requirements can and should be elicited from different viewpoints, and that examination of the differences resulting from them can be used as a way of assisting in the early validation of requirements. A language for expressing views from different viewpoints and a set of analogy heuristics for performing a syntactically oriented analysis of views are proposed. This analysis of views is capable of differentiating between missing information and conflicting information, thus providing support for viewpoint resolution. | Distributed Termination Discussed is a distributed system based on communication among disjoint processes, where each process is capable of achieving a post-condition of its local space in such a way that the conjunction of local post-conditions implies a global post-condition of the whole system. The system is then augmented with extra control communication in order to achieve distributed termination, without adding new channels of communication. The algorithm is applied to a problem of constructing a sorted partition. | A software design method for real-time systems DARTS—a design method for real-time systems—leads to a highly structured modular system with well-defined interfaces and reduced coupling between tasks. | Robust contour decomposition using a constant curvature criterion The problem of decomposing an extended boundary or contour into simple primitives is addressed with particular emphasis on Laplacian-of-Gaussian zero-crossing contours. A technique is introduced for partitioning such contours into constant curvature segments. A nonlinear 'blip' filter matched to the impairment signature of the curvature computation process, an overlapped voting scheme, and a sequential contiguous segment extraction mechanism are used. This technique is insensitive to reasonable changes in algorithm parameters and robust to noise and minor viewpoint-induced distortions in the contour shape, such as those encountered between stereo image pairs. The results vary smoothly with the data, and local perturbations induce only local changes in the result. Robustness and insensitivity are experimentally verified. | Construction of Finite Labelled Transistion Systems from B Abstract Systems In this paper, we investigate how to represent the behaviour of B abstract systems by finite labelled transition systems (LTS). We choose to decompose the state of an abstract system in several disjunctive predicates. These predicates provide the basis for defining a set of states which are the nodes of the LTS, while the events are the transitions. We have carried out a connection between the B environment (Atelier B) and the Cæsar/Aldebaran Development Package (CADP) which is able to deal with LTS. We illustrate the method by developing the SCSI-2 (Small Computer Systems Interface) input-output system. Finally, we discuss about the outcomes of this method and about its applicability. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. 
Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.075283 | 0.009718 | 0.007163 | 0.005 | 0.001157 | 0.000336 | 0.0002 | 0.000137 | 0.000085 | 0.000022 | 0 | 0 | 0 | 0
Lyapunov-Krasovskii functionals for predictor feedback control of linear systems with multiple input delays This paper is concerned with the Lyapunov-Krasovskii functional construction of linear control systems with multiple input delays. By transforming the predictor feedback control systems into a delay-free linear system with external inputs, a Lyapunov-Krasovskii functional is constructed in terms of a set of linear matrix inequalities (LMIs). It is shown that the solvability of this set of LMIs is equivalent to the asymptotic stability of the delay-free linear system induced from the predictor feedback control system. The proposed Lyapunov-Krasovskii functional is also found to be an ISS Lyapunov-Krasovskii functional for the predictor feedback control systems. An example is worked out to validate the effectiveness of the proposed method. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*.
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set.
The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
An introduction to assertional reasoning for concurrent systems This is a tutorial introduction to assertional reasoning based on temporal logic. The objective is to provide a working familiarity with the technique. We use a simple system model and a simple proof system, and we keep to a minimum the treatment of issues such as soundness, completeness, compositionality, and abstraction. We model a concurrent system by a state transition system and fairness requirements. We reason about such systems using Hoare logic and a subset of linear-time temporal logic, specifically, invariant assertions and leads-to assertions. We apply the method to several examples. | Weakest precondition semantics for time and concurrency A weakest precondition semantics for a real-time concurrent language is defined. An example in verification is presented, and the use of predicate transformers as the basis of a refinement calculus is also discussed. | Protocol verification as a hardware design aid The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs. Most complex digital designs must be regarded as... | The Model Checker SPIN SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. This paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications. | Automatic verification of finite-state concurrent systems using temporal logic specifications We give an efficient procedure for verifying that a finite-state concurrent system meets a specification expressed in a (propositional, branching-time) temporal logic. Our algorithm has complexity linear in both the size of the specification and the size of the global state graph for the concurrent system. We also show how this approach can be adapted to handle fairness. We argue that our technique can provide a practical alternative to manual proof construction or use of a mechanical theorem prover for verifying many finite-state concurrent systems. Experimental results show that state machines with several hundred states can be checked in a matter of seconds. | Decentralization of process nets with centralized control The behavior of a net of interconnected, communicating processes is described in terms of the joint actions in which the processes can participate. A distinction is made between centralized and decentralized action systems. In the former, a central agent with complete information about the state of the system controls the execution of the actions; in the latter no such agent is needed. Properties of joint action systems are expressed in temporal logic. Centralized action systems allow for simple description of system behavior. Decentralized (two-process) action systems again can be mechanically compiled into a collection of CSP processes. A method for transforming centralized action systems into decentralized ones is described.
The correctness of this method is proved, and its use is illustrated by deriving a process net that distributedly sorts successive lists of integers. | The Challenge of Probabilistic Event B - Extended Abstract Among the many opportunities offered by computational semantics for probability, the challenge of probabilistic Event B (pEB) is one of the most attractive. The B method itself is now almost 20 years old, and has been much improved and adapted over that time by the many projects to which it has been applied, and by its philosophy -right from the start- that it must be practical, effective and amenable to tool support.; more recently, Event B has extended it and altered its style of use. The probabilisticprogram semantics we appeal to is even older (in Kozen's original form), but has only recently been "revived" in the context of B-style abstraction and refinement. The especial attraction of putting the two together is the likely interplay between the probabilistic theory, on the one hand, and the decades of practical experience that have by now been built-in to the B approach, on the other. In particular, there are areas where a full theoretical treatment of probability, concurrency, abstraction and refinement-all at once-seems prohibitively complex; and yet in practice either the complexities seldom occur, or the exigencies of B's having been so-often applied to real, nontoy problems has forced it to evolve styles for avoiding such complexities. In short, we want to use (event) B to guide us towards the issues that truly are important. Rabin's randomized mutual-exclusion algorithm is used as a motivating case study. | Normal form approach to compiler design This paper demonstrates how reduction to normal form can help in the design of a correct compiler for Dijkstra's guarded command language. The compilation strategy is to transform a source program, by a series of algebraic manipulations, into a normal form that describes the behaviour of a stored-program computer. Each transformation eliminates high-level language constructs in favour of lower-level constructs. The correctness of the compiler follows from the correctness of each of the algebraic transformations. | Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication. | Goal-directed requirements acquisition Requirements analysis includes a preliminary acquisition step where a global model for the specification of the system and its environment is elaborated. This model, called requirements model, involves concepts that are currently not supported by existing formal specification languages, such as goals to be achieved, agents to be assigned, alternatives to be negotiated, etc. The paper presents an approach to requirements acquisition which is driven by such higher-level concepts. Requirements models are acquired as instances of a conceptual meta-model. The latter can be represented as a graph where each node captures an abstraction such as, e.g., goal, action, agent, entity, or event, and where the edges capture semantic links between such abstractions. 
Well-formedness properties on nodes and links constrain their instances—that is, elements of requirements models. Requirements acquisition processes then correspond to particular ways of traversing the meta-model graph to acquire appropriate instances of the various nodes and links according to such constraints. Acquisition processes are governed by strategies telling which way to follow systematically in that graph; at each node specific tactics can be used to acquire the corresponding instances. The paper describes a significant portion of the meta-model related to system goals, and one particular acquisition strategy where the meta-model is traversed backwards from such goals. The meta-model and the strategy are illustrated by excerpts of a university library system. | The use of lexical affinities in requirements extraction The use of lexical affinities to help a human requirements analyst find abstractions in problem descriptions is explored. It is hoped that a lexical affinities finding tool can be used as part of an environment to help organize the sentences and phrases of a natural language problem description to aid the requirements analyst in the extraction of requirements. An experiment to confirm its effectiveness is described. The first steps in the development of any computational system should be the writing of requirements with the client's help. It may be necessary to build a prototype first, but ultimately before building a production-quality version, it is necessary to agree upon what is to be in the system. Winchester and Estrin (34) list a number of requirements for the requirements themselves. The main of these from the programmer-client perspective are that the requirements must be understandable to both the customers and the designers and builders; the parts of the requirements must be consistent with each other; and the requirements must be complete so that the designers and builders do not have to make unintended value judgements during their work. This paper deals ultimately with, describes, and determines the effectiveness of one tool designed to assist in one part of the process of writing requirements. It is essential that the reader understand the context in which this tool is expected to operate. Hence, Sections 2 through 5 are devoted to briefly describing this context. desired system should do. These views range from being totally unrelated to each other to being totally inconsistent with each other. It is no wonder that the distillation of these views into a consistent, complete, and unambiguous statement of the requirements, albeit in natural language, is a major part of the problem of developing software which meets the client's needs. Therefore, it is essential to have methods and tools that help in distilling these many views into coherent requirements. 3. PAST WORK There are already a variety of systems, tools, and methods for dealing with requirements. These include SADT 128,271, IORL 1311, PSL/PSA (32), RDL (34), RSL (5,6,7), RML (11) and Burstin's prototype (12) tool. The first two are graphically oriented, and the second of these is automated. The remainder work from highly constrained subsets of English consisting of sentences, each of which states one requirement to which the final implementation must adhere. These sentences can be considered as relations in a database.
Those which are automated have tools for working with the sentences and abstractions of the requirements document once these sentences and abstractions have been recognized and stated. Due to space limitations, only those having a direct impact on this work are described in detail herein. A | Exits in the Refinement Calculus Although many programming languages contain exception handling mechanisms, their formal treatment — necessary for rigorous development — can be complex. Nevertheless, this paper presents a simple incorporation of exit commands and exception blocks into a rigorous program development method. The refinement calculus, chosen for the exercise, is a method of developing imperative programs. It is based on weakest preconditions, although they are not used explicitly during program construction; they merely justify the general method. In the style of the refinement calculus, program development laws are given that introduce and allow the manipulation of exits. The soundness of the new laws is shown using weakest preconditions (as for the existing refinement calculus laws). The extension of weakest preconditions needed to handle exits is a variation on earlier work of Cristian; the variation is necessary to handle nondeterminism. | A Hypergraph-based Framework for Visual Interaction with Databases The advent of graphical workstations has led to a new generation of interaction tools in database systems, where the use of graphics greatly enhances the quality of the interaction. Yet, Visual Query Languages present some limitations, deriving partly from their own paradigm and partly from the available technology. One of the basic drawbacks is the lack of formalization, in contrast to the well-established traditional languages. In this paper we propose a theoretical framework for visual interaction with databases, having a particular kind of hypergraph, the Structure Modeling Hypergraph (SMH), as a representation tool, able to capture the features of existing data models. SMHs profit from the basic property of diagrams while overcoming their limitations. Notable characteristics of SMHs are: uniform and unified representation of intensional and extensional aspects of databases, direct representation of containment relationships, and immediate applicability of direct manipulation primitives. SMHs are not a new data model but a new representation language that provides the syntactic rules for describing the structuring mechanisms of data models. SMHs can be queried by formal systems closed under queries. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.075101 | 0.05 | 0.016667 | 0.004554 | 0.002273 | 0.00026 | 0.00007 | 0.000021 | 0.000005 | 0 | 0 | 0 | 0 | 0
Design for a multiprocessing heap with on-board reference counting Without Abstract | The development of the MU5 computer system Following a brief outline of the background of the MU5 project, the aims and ideas for MU5 are discussed. A description is then given of the instruction set, which includes a number of features conducive to the production of efficient compiled code from high-level language source programs. The design of the processor is then traced from the initial ideas for an associatively addressed “name store” to the final multistage pipeline structure involving a prediction mechanism for instruction prefetching and a function queue for array element accessing. An overall view of the complete MU5 complex is presented together with a brief indication of its performance. | The Architecture of Lisp Machines First Page of the Article | A transputer-based parallel Lisp implementation This paper reports the effort made to implement BaLinda Lisp, a parallel Lisp dialect, on transputer arrays. BaLinda Lisp supports the FUTURE construct to initiate parallel execution threads, speculative constructs to spawn parallel tasks for results that may be required, and tuple space operations to enforce the proper communication, synchronization, mutual exclusion and shared variable access for parallel tasks. A suite of application programs has been tested on the resulting interpreter and some performance results are presented. The results demonstrate that the interpreter achieves realistic parallelism and provides a high speed symbolic processing environment on transputers. | Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from
deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of
synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant
and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences
associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary
variables is also given. The applicability of the techniques presented is discussed through various examples; their use for
verification purposes is illustrated as well. | A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe mathematically, the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity | Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing. | ACE: building interactive graphical applications | Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on theprimitive operations of substitution and test for equality, isconstructed. This base language permits unbounded nondeterminism,demonic and angelic nondeterminism. A dual language permitting miraclesis constructed. Combining these two languages yields an extended baselanguage which is complete, in the sense that all monotonic predicatetransformers can be constructed in it. The extended base languageprovides a unifying framework for various specification languages; weshow how two Dijkstra-style specification languages can be embedded init.—Authors' Abstract | Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program. 
| Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity- Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation. | Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered. | Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well, Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large! complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. 
Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.2 | 0.133333 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
An Investigative Search Engine for the Human Trafficking Domain. Enabling intelligent search systems that can navigate and facet on entities, classes and relationships, rather than plain text, to answer questions in complex domains is a longstanding aspect of the Semantic Web vision. This paper presents an investigative search engine that meets some of these challenges, at scale, for a variety of complex queries in the human trafficking domain. The engine provides a real-world case study of synergy between technology derived from research communities as diverse as Semantic Web (investigative ontologies, SPARQL-inspired querying, Linked Data), Natural Language Processing (knowledge graph construction, word embeddings) and Information Retrieval (fast, user-driven relevance querying). The search engine has been rigorously prototyped as part of the DARPA MEMEX program and has been integrated into the latest version of the Domain-specific Insight Graph (DIG) architecture, currently used by hundreds of US law enforcement agencies for investigating human trafficking. Over a hundred million ads have been indexed. The engine is also being extended to other challenging illicit domains, such as securities and penny stock fraud, illegal firearm sales, and patent trolling, with promising results. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools.
In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system.
The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Re-weighting Linear Discrimination Analysis under ranking loss Linear discrimination analysis (LDA) is one of the most popular feature extraction and classifier design techniques. It maximizes the Fisher-ratio between between-class scatter matrix and within-class scatter matrix under a linear transformation, and the transformation is composed of the generalized eigenvectors of them. However, Fisher criterion itself can not decide the optimum norm of transformation vectors for classification. In this paper, we show that actually the norm of the transformation vectors has strong influence on classification performance, and we propose a novel method to estimate the optimum norm of LDA under the ranking loss, re-weighting LDA. On artificial data and real databases, the experiments demonstrate the proposed method can effectively improve the performance of LDA classifiers. And the algorithm can also be applied to other LDA variants such as non parametric discriminant analysis (NDA) to improve theirs performance further. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. 
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. 
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Linear hybrid action systems Action Systems is a predicate transformer based formalism. It supports the development of provably correct reactive and distributed systems by refinement. Recently, Action Systems were extended with a differential action. It is used for modelling continuous behaviour, thus, allowing the use of refinement in the development of provably correct hybrid systems, i.e., a discrete controller interacting with some continuously evolving environment. However, refinement as a method is concerned with correctness issues only. It offers very little guidance in what details one should consider during the refinement steps to make the system more robust. That information is revealed by robustness analysis. Other formalisms not supporting refinement do have tool support for automating the robustness analysis, e.g., HyTech for linear hybrid automata. Consequently, we study in this paper the non-trivial translation problem between Action Systems and linear hybrid automata. As the main contribution, we give and prove correct an algorithm that translates a linear hybrid action system to a linear hybrid automaton. With this algorithm we combine the strengths of the two formalisms: we may use HyTech for the robustness analysis to guide the development by refinement. | Semantics, Orderings and Recursion in the Weakest Precondition Calculus An extension of Dijkstra's guarded command language is studied, including sequential composition, demonic choice and a backtrack operator. We consider three orderings on this language: a refinement ordering defined by Back, a new deadlock ordering, and an approximation ordering of Nelson. The deadlock ordering is in between the two other orderings. All operators are monotonic in Nelson's ordering, but backtracking is not monotonic in Back's ordering and sequential composition is not... | Reverse engineering distributed algorithms Distributed systems are difficult for a human being to comprehend, informal reasoning about the many parallel and decentralized activities in these systems is not trustworthy. Therefore formal tools for construction and maintenance of distributed systems are needed. We introduce a formal approach to reverse engineering distributed systems that is based on a technique we call coarsement. The idea is that an implementation is stepwise turned into a high level specification through a number of intermediate coarsement steps that preserve the basic functionality of the implementation. The method gives structure to a distributed algorithm that can now be seen as consisting of a number of layers interacting with each other. Each coarsement step produces one such layer. Furthermore, after the coarsement steps the algorithm is easier to understand and to reason about than the original one due to this layering. We show the practical feasibility of the coarsement approach to reverse engineering by analysing a non-trivial distributed algorithm that maintains the routeing information for message passing among a set of processing nodes in a distributed network. | Data Refinement and Remote Procedures Recently the action systems formalism for parallel and distributed systems has been extended with the procedure mechanism. This gives us a very general framework for describing different communication paradigms for action systems, e.g. remote procedure calls. Action systems come with a design methodology based on the refinement calculus. Data refinement is a powerful technique for refining action systems.
In this paper we will develop a theory and proof rules for the refinement of action systems that communicate via remote procedures based on the data refinement approach. The proof rules we develop are compositional so that modular refinement of action systems is supported. As an example we will especially study the atomicity refinement of actions. This is an important refinement strategy, as it potentially increases the degree of parallelism in an action system. | Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication. | Decentralization of process nets with centralized control The behavior of a net of interconnected, communicating processes is described in terms of the joint actions in which the processes can participate. A distinction is made between centralized and decentralized action systems. In the former, a central agent with complete information about the state of the system controls the execution of the actions; in the latter no such agent is needed. Properties of joint action systems are expressed in temporal logic. Centralized action systems allow for simple description of system behavior. Decentralized (two-process) action systems again can be mechanically compiled into a collection of CSP processes. A method for transforming centralized action systems into decentralized ones is described. The correctness of this method is proved, and its use is illustrated by deriving a process net that distributedly sorts successive lists of integers. | A distributed algorithm to implement n-party rendezvous The concept of n-party rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The problem of implementing n-party rendezvous captures two central issues in the design of distributed systems: exclusion and synchronization. This paper describes a simple, distributed algorithm, referred to as the event manager algorithm, to implement n-party rendezvous. It also compares the performance of this algorithm with an existing algorithm for this problem. | ImpUNITY: UNITY with Procedures and Local Variables In this paper we present the ImpUNITY framework, a framework that supports the development of parallel and distributed programs from specification to implementation in a stepwise manner. The ImpUNITY framework is an extension of UNITY, as introduced by Chandy and Misra, with features of the Action System formalism of Back and Kurki-Suonio. Due to this extension, the ImpUNITY framework is more suitable for the implementation phase of the development process. Therefore, it supports local variables... | Mathematics of Program Construction, MPC'95, Kloster Irsee, Germany, July 17-21, 1995, Proceedings | Completeness and Consistency in Hierarchical State-Based Requirements This paper describes methods for automatically analyzing formal, state-based requirements specifications for some aspects of completeness and consistency. The approach uses a low-level functional formalism, simplifying the analysis process.
State-space explosion problems are eliminated by applying the analysis at a high level of abstraction; i.e., instead of generating a reachability graph for analysis, the analysis is performed directly on the model. The method scales up to large systems by decomposing the specification into smaller, analyzable parts and then using functional composition rules to ensure that verified properties hold for the entire specification. The analysis algorithms and tools have been validated on TCAS II, a complex, airborne, collision-avoidance system required on all commercial aircraft with more than 30 passengers that fly in U.S. airspace. | PARIS: a system for reusing partially interpreted schemas This paper describes PARIS, an implemented system that facilitates the reuse of partially interpreted schemas. A schema is a program and specification with abstract, or uninterpreted, entities. Different interpretations of those entities will produce different programs. The PARIS System maintains a library of such schemas and provides an interactive mechanism to interpret a schema into a useful program by means of partially automated matching and verification procedures. | Network Topology and a Case Study in TCOZ Object-Z is strong in modeling the data and operations of complex systems.However, it is weak in specifying real-time and concurrent systems.The Timed Communicating Object-Z (TCOZ) extends Object-Z notation withTimed CSP's constructs. TCOZ is particularly well suited for specifying complexsystems whose components have their own thread of control. This paperdemonstrates expressiveness of the TCOZ notation through a case study onspecifying a multi-lift system that operates in real-time.1... | Measuring process flexibility and agility In their attempt to improve their systems and architectures, organizations need to be aware of the types of flexibility and agility and the current level of each type of flexibility and agility. Flexibility is the general ability to react to changes, whilst agility is the speed in responding to variety and changes Both flexibility and agility are diverse concepts that are hard to grasp. In this paper the types of flexibility and agility of business processes is discussed on a foundation level and an approach to measure the level of flexibility and agility is proposed. A case study of the flexibility and agility measurement is used to demonstrate the approach. The illustration is used to discuss the difficulties and limitations of the measurement approach. There is no uniform definition of or view on flexibility and agility. This makes it hard to develop a measurement approach. Furthermore, as business processes can be different, this might result in different metrics for measuring the level of flexibility and agility. There is no single measure and for each type of business process and flexibility and agility should always measure by a combination of metrics. In addition, both qualitative and quantitative metrics should be used to measure the level of flexibility and agility. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... 
| 1.105303 | 0.052671 | 0.02 | 0.008337 | 0.002657 | 0.000311 | 0.000057 | 0.000009 | 0.000001 | 0 | 0 | 0 | 0 | 0 |
State estimation of neutral Markovian jump systems: A relaxed L-K functional approach. This paper investigates the state estimation problem of uncertain neutral delay systems with Markovian jumping parameters. First, a novel Lyapunov–Krasovskii (L–K) functional containing the interconnected information between neutral and discrete delay is proposed. Then, based on Jensen’s integral and Wirtinger-based inequality, the obtained results are improved by relaxing the positive-definiteness restrictions on Lyapunov matrices. Third, the state estimators are designed to guarantee the asymptotic stability of the error state system. Finally, numerical examples are provided to show the effectiveness of the proposed results. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. 
As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. 
| 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure | On Clustering Validation Techniques Cluster analysis aims at identifying groups of similar objects and, therefore helps to discover distribution of patterns and interesting correlations in large data sets. It has been subject of wide research since it arises in many application domains in engineering, business and social sciences. Especially, in the last years the availability of huge transactional and experimental data sets and the arising requirements for data mining created needs for clustering algorithms that scale and can be applied in diverse domains. This paper introduces the fundamental concepts of clustering while it surveys the widely known clustering algorithms in a comparative way. Moreover, it addresses an important issue of clustering process regarding the quality assessment of the clustering results. This is also related to the inherent features of the data set under concern. A review of clustering validity measures and approaches available in the literature is presented. Furthermore, the paper illustrates the issues that are under-addressed by the recent algorithms and gives the trends in clustering process. | Evaluating document clustering for interactive information retrieval We consider the problem of organizing and browsing the top ranked portion of the documents returned by an information retrieval system. We study the effectiveness of a document organization in helping a user to locate the relevant material among the retrieved documents as quickly as possible. In this context we examine a set of clustering algorithms and experimentally show that a clustering of the retrieved documents can be significantly more effective than traditional ranked list approach. We also show that the clustering approach can be as effective as the interactive relevance feedback based on query expansion while retaining an important advantage -- it provides the user with a valuable sense of control over the feedback process. | Simulating simple user behavior for system effectiveness evaluation Information retrieval effectiveness evaluation typically takes one of two forms: batch experiments based on static test collections, or lab studies measuring actual users interacting with a system. Test collection experiments are sometimes viewed as introducing too many simplifying assumptions to accurately predict the usefulness of a system to its users. As a result, there is great interest in creating test collections and measures that better model user behavior. One line of research involves developing measures that include a parameterized user model; choosing a parameter value simulates a particular type of user. We propose that these measures offer an opportunity to more accurately simulate the variance due to user behavior, and thus to analyze system effectiveness to a simulated user population. We introduce a Bayesian procedure for producing sampling distributions from click data, and show how to use statistical tools to quantify the effects of variance due to parameter selection. | Automatic selection of noun phrases as document descriptors in an FCA-Based information retrieval system Automatic attribute selection is a critical step when using Formal Concept Analysis (FCA) in a free text document retrieval framework. Optimal attributes as document descriptors should produce smaller, clearer and more browsable concept lattices with better clustering features.
In this paper we focus on the automatic selection of noun phrases as document descriptors to build an FCA-based IR framework. We present three different phrase selection strategies which are evaluated using the Lattice Distillation Factor and the Minimal Browsing Area evaluation measures. Noun phrases are shown to produce lattices with good clustering properties, with the advantage (over simple terms) of being better intensional descriptors from the user's point of view. | Retrieval evaluation with incomplete information This paper examines whether the Cranfield evaluation methodology is robust to gross violations of the completeness assumption (i.e., the assumption that all relevant documents within a test collection have been identified and are present in the collection). We show that current evaluation measures are not robust to substantially incomplete relevance judgments. A new measure is introduced that is both highly correlated with existing measures when complete judgments are available and more robust to incomplete judgment sets. This finding suggests that substantially larger or dynamic test collections built using current pooling practices should be viable laboratory tools, despite the fact that the relevance information will be incomplete and imperfect. | UNED Online Reputation Monitoring Team at RepLab 2013. | A bootstrapping approach for training a NER with conditional random fields In this paper we present a bootstrapping approach for training a Named Entity Recognition (NER) system. Our method starts by annotating persons' names on a dataset of 50,000 news items. This is performed using a simple dictionary-based approach. Using such training set we build a classification model based on Conditional Random Fields (CRF). We then use the inferred classification model to perform additional annotations of the initial seed corpus, which is then used for training a new classification model. This cycle is repeated until the NER model stabilizes. We evaluate each of the bootstrapping iterations by calculating: (i) the precision and recall of the NER model in annotating a small gold-standard collection (HAREM); (ii) the precision and recall of the CRF bootstrapping annotation method over a small sample of news; and (iii) the correctness and the number of new names identified. Additionally, we compare the NER model with a dictionary-based approach, our baseline method. Results show that our bootstrapping approach stabilizes after 7 iterations, achieving high values of precision (83%) and recall (68%). | State-Based Model Checking of Event-Driven System Requirements It is demonstrated how model checking can be used to verify safety properties for event-driven systems. SCR tabular requirements describe required system behavior in a format that is intuitive, easy to read, and scalable to large systems (e.g. the software requirements for the A-7 military aircraft). Model checking of temporal logics has been established as a sound technique for verifying properties of hardware systems. An automated technique for formalizing the semiformal SCR requirements and for transforming the resultant formal specification onto a finite structure that a model checker can analyze has been developed. This technique was effective in uncovering violations of system invariants in both an automobile cruise control system and a water-level monitoring system. | Generalized kraft inequality and arithmetic coding Algorithms for encoding and decoding finite strings over a finite alphabet are described. 
The coding operations are arithmetic involving rational numbers $l_i$ as parameters such that $\sum_i 2^{-l_i} \le 2^{-\varepsilon}$. This coding technique requires no blocking, and the per-symbol length of the encoded string approaches the associated entropy within ε. The coding speed is comparable to that of conventional coding methods. | Generating Executable Scenarios from Natural Language Bridging the gap between the specification of software requirements and actual execution of the behavior of the specified system has been the target of much research in recent years. We have created a natural language interface, which, for a useful class of systems, yields the automatic production of executable code from structured requirements. In this paper we describe how our method uses static and dynamic grammar for generating live sequence charts (LSCs), that constitute a powerful executable extension of sequence diagrams for reactive systems. We have implemented an automatic translation from controlled natural language requirements into LSCs, and we demonstrate it on two sample reactive systems. | An Overview of JPEG-2000 JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach, as we believe it is more amenable to a compact description more easily understood by most readers. | Optimal Priority Assignment Algorithms for Probabilistic Real-Time Systems. | Analysis and Design of Secure Massive MIMO Systems in the Presence of Hardware Impairments. To keep the hardware costs of future communications systems manageable, the use of low-cost hardware components is desirable. This is particularly true for the emerging massive multiple-input multiple-output (MIMO) systems which equip base stations (BSs) with a large number of antenna elements. However, low-cost transceiver designs will further accentuate the hardware impairments, which are presen... | 1.063158 | 0.068948 | 0.040658 | 0.020528 | 0.020329 | 0.00695 | 0.000087 | 0.000036 | 0 | 0 | 0 | 0 | 0 | 0
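The query of the row above concerns the V-measure, which scores a clustering against reference classes using two conditional entropies: homogeneity h = 1 − H(C|K)/H(C), completeness c = 1 − H(K|C)/H(K), and their weighted harmonic mean. A small self-contained sketch of that computation follows; it is illustrative only and is not code from the cited paper.

```python
# Illustrative sketch of the conditional-entropy definition behind V-measure.
import math
from collections import Counter


def _entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())


def _conditional_entropy(targets, given):
    # H(targets | given), estimated from joint label counts.
    n = len(targets)
    joint = Counter(zip(given, targets))
    given_counts = Counter(given)
    return -sum((njk / n) * math.log(njk / given_counts[k])
                for (k, _), njk in joint.items())


def v_measure(classes, clusters, beta=1.0):
    h = 1.0 if _entropy(classes) == 0 else 1 - _conditional_entropy(classes, clusters) / _entropy(classes)
    c = 1.0 if _entropy(clusters) == 0 else 1 - _conditional_entropy(clusters, classes) / _entropy(clusters)
    if h + c == 0:
        return 0.0
    return (1 + beta) * h * c / (beta * h + c)


# A clustering that is perfect up to label renaming scores 1.0.
print(v_measure([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```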
Relaxed conditions for stability of time-varying delay systems. In this paper, the problem of delay-dependent stability analysis of time-varying delay systems is investigated. Firstly, a new inequality which is the modified version of free-matrix-based integral inequality is derived, and then by aid of this new inequality, two novel lemmas which are relaxed conditions for some matrices in a Lyapunov function are proposed. Based on the lemmas, improved delay-dependent stability criteria which guarantee the asymptotic stability of the system are presented in the form of linear matrix inequality (LMI). Two numerical examples are given to describe the less conservatism of the proposed methods. | Networked control system with asynchronous samplings and quantizations in both transmission and receiving channels. This study addresses a problem of the controlling networked control systems (NCSs) which is consisted of the continuous-time plant and controller. In both transmission and receiving channels, asynchronous sampling and different logarithmic quantization effects are considered. By categorizing three cases of asynchronous sampling and using two properties of quantizer which are sector bounded and convex combination, sufficient conditions of the existence of desired controllers for each asynchronous case are presented in the form of linear matrix inequalities (LMIs). Simulation results are given to illustrate the validity of the proposed methods. | Novel Lyapunov-Krasovskii functional with delay-dependent matrix for stability of time-varying delay systems.
This paper investigates the stability criteria of time-varying delay systems with known bounds of the delay and its derivative. To obtain a tighter bound on the integral term, a quadratic generalized free-weighting matrix inequality (QGFMI) is proposed. Furthermore, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed with a delay-dependent matrix, which incorporates the information on the bound of the delay derivative. The relaxed stability condition using the QGFMI and LKF provides a larger delay bound with low computational burden. The superiority of the proposed stability condition is verified by comparison with recent results. | Pinning Event-Triggered Sampling Control for Synchronization of T-S Fuzzy Complex Networks With Partial and Discrete-Time Couplings This paper focuses on the synchronization problem of Takagi-Sugeno (T-S) fuzzy complex networks with partial and discrete-time couplings via event-triggered sampling control. Different from traditional control methods, a more general and practical event-triggered communication scheme with nonuniform sampling is newly designed for T–S fuzzy complex networks. Then, a Lyapunov–Krasovskii functional (LKF) with a novel input-delay-product-type (IDPT) term is presented. The IDPT term can fully capture the information of the nonlinear functions and the actual sampling pattern. Based on the new IDPT LKF, less conservative synchronization criteria are derived. Meanwhile, by solving a set of linear matrix inequalities, the desired pinning control gains are precisely obtained. Simulation examples are provided to illustrate the effectiveness and superiorities of the proposed results. | Improved approaches on adaptive event-triggered output feedback control of networked control systems. This paper studies the static output-feedback control in a class of networked control systems. Different from the existing results, the transmission of control signals is based on a novel adaptive event-triggered scheme, where the adaptive thresholds depend on the dynamic error of the system rather than predetermined constants as the traditional ones. The amount of the releasing data is regulated by the adaptive thresholds that play an essential role in the decision of whether to release the sampled data or not. Through fully using the information on network-induced delay and introducing two adjusting parameters, an augmented Lyapunov–Krasovskii (L–K) functional is constructed. Especially, some novel Wirtinger-based integral inequalities are utilized to reconsider the previously ignored information, which can help reduce the conservatism. Furthermore, a novel constructive method is developed to obtain the controller gain by solving the achieved linear matrix inequalities (LMIs). Finally, three numerical examples are given to illustrate the efficiency of the presented results. | Stability analysis of Lur'e systems with additive delay components via a relaxed matrix inequality.
This paper is concerned with the stability analysis of Lur'e systems with sector-bounded nonlinearity and two additive time-varying delay components. In order to accurately understand the effect of time delays on the system stability, the extended matrix inequality for estimating the derivative of the Lyapunov-Krasovskii functionals (LKFs) is employed to achieve the conservatism reduction of stability criteria. It reduces the estimation gap of the popular reciprocally convex combination lemma (RCCL). Combining the extended matrix inequality and two types of LKFs leads to several stability criteria, which are less conservative than the RCCL-based criteria under the same LKFs. Finally, the advantages of the proposed criteria are demonstrated through two examples. | Some novel approaches on state estimation of delayed neural networks. This paper studies the issue of state estimation for a class of neural networks (NNs) with time-varying delay. A novel Lyapunov-Krasovskii functional (LKF) is constructed, where triple integral terms are used and a secondary delay-partition approach (SDPA) is employed. Compared with the existing delay-partition approaches, the proposed approach can exploit more information on the time-delay intervals. By taking full advantage of a modified Wirtinger's integral inequality (MWII), improved delay-dependent stability criteria are derived, which guarantee the existence of desired state estimator for delayed neural networks (DNNs). A better estimator gain matrix is obtained in terms of the solution of linear matrix inequalities (LMIs). In addition, a new activation function dividing method is developed by bringing in some adjustable parameters. Three numerical examples with simulations are presented to demonstrate the effectiveness and merits of the proposed methods. | A new looped-functional for stability analysis of sampled-data systems. In this paper, a new two-sided looped-functional is introduced for stability analysis of sampled-data systems. The functional fully utilizes the information on both the intervals x(t) to x(t_k) and x(t) to x(t_{k+1}). Based on the two-sided functional, an improved stability condition is derived in the form of linear matrix inequality (LMI). Numerical examples show that the result computed by the presented condition approximates nearly the theoretical bound (bound obtained by eigenvalue analysis) and outperforms substantially others in the existing literature. | Wirtinger-based multiple integral inequality for stability of time-delay systems. Note that the conservatism of the delay-dependent stability criteria can be reduced by increasing the integral terms in Lyapunov–Krasovskii functional (LKF). This brief revisits the stability problem for a class of linear time-delay systems via multiple integral approach. The novelty of this brief lies in that a Wirtinger-based multiple integral inequality is employed to estimate the derivative of a class of LKF with multiple integral terms. Based on these innovations, a new delay-dependent stability criterion is derived in terms of linear matrix inequalities. Two numerical examples are exploited to demonstrate the effectiveness and superiority of the proposed method. | A New Digital Image Watermarking Algorithm Resilient to Desynchronization Attacks Synchronization is crucial to design a robust image watermarking scheme. In this paper, a novel feature-based image watermarking scheme against desynchronization attacks is proposed.
The robust feature points, which can survive various signal-processing and affine transformation, are extracted by using the Harris-Laplace detector. A local characteristic region (LCR) construction method based on the scale-space representation of an image is considered for watermarking. At each LCR, the digital watermark is repeatedly embedded by modulating the magnitudes of discrete Fourier transform coefficients. In watermark detection, the digital watermark can be recovered by maximum membership criterion. Simulation results show that the proposed scheme is invisible and robust against common signal processing, such as median filtering, sharpening, noise adding, JPEG compression, etc., and desynchronization attacks, such as rotation, scaling, translation, row or column removal, cropping, and random bend attack, etc. | Beyond models and metaphors: visual formalisms in user interface design The user interface has both syntactic functions-supplying commands and arguments to programs-and semantic functions-visually presenting application semantics and supporting problem solving cognition. The authors argue that though both functions are important, it is time to devote more resources to the problems of the semantic interface. Complex problem solving activities, e.g. for design and analysis tasks, benefit from clear visualizations of application semantics in the user interface. Designing the semantic interface requires computational building blocks capable of representing and visually presenting application semantics in a clear, precise way. The authors argue that neither mental models not metaphors provide a basis for designing and implementing such building blocks, but that visual formalisms do. They compare the benefits of mental models, metaphors and visual formalisms as the basis for designing the user interface, with particular attention to the practical solutions each provides to application developers | Dealing with Change: An Approach Using Non-functional Requirements. Non-functional requirements (or Quality Requirements, NFRs) such as confidentiality, performanceand timeliness are often crucial to a software system. Concerns for such NFRs are oftenthe impetus for change. To systematically support system evolution, this paper adapts the"NFR-Framework" which treats NFRs as goals to be achieved during development. Throughoutthe process, consideration of design alternatives, analysis of tradeoffs and rationalisationof design decisions are all carried out in ... | A comparison of multiprocessor task scheduling algorithms with communication costs Both parallel and distributed network environment systems play a vital role in the improvement of high performance computing. Of primary concern when analyzing these systems is multiprocessor task scheduling. Therefore, this paper addresses the challenge of multiprocessor task scheduling parallel programs, represented as directed acyclic task graph (DAG), for execution on multiprocessors with communication costs. Moreover, we investigate an alternative paradigm, where genetic algorithms (GAs) have recently received much attention, which is a class of robust stochastic search algorithms for various combinatorial optimization problems. We design the new encoding mechanism with a multi-functional chromosome that uses the priority representation-the so-called priority-based multi-chromosome (PMC). PMC can efficiently represent a task schedule and assign tasks to processors. 
The proposed priority-based GA has shown effective performance in various parallel environments for scheduling methods. | Hyperspectral data compression using sparse representation Since all bands of hyperspectral data cover the same imaging area, it is reasonable to believe that a dictionary which can sparsely represent one band may also represent the other bands sparsely. Based on this property, this paper presents a new compression framework for hyperspectral data using sparse representation, and a simplified algorithm under this framework is also proposed. The basic idea of the proposed algorithm is to sparsely code the bands using the dictionary learned from one training band, and its innovation is that patches at the same spatial location in all bands are restricted to be represented using the same atoms. Experimental results based on OMP and K-SVD are provided, which reveal that this proposal has better performance than a wavelet-based compression algorithm at low bit rates. | 1.007411 | 0.008 | 0.007744 | 0.007333 | 0.006667 | 0.003716 | 0.002472 | 0.001549 | 0.000333 | 0 | 0 | 0 | 0 | 0
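Several abstracts in the rows around this point rely on the same basic construction: a Lyapunov–Krasovskii functional whose derivative is bounded via integral inequalities and then cast as linear matrix inequalities. As a generic illustration only, and not the specific functionals proposed in those papers, a standard setup for the delayed system x'(t) = A x(t) + A_d x(t − τ(t)) with 0 ≤ τ(t) ≤ h is sketched below.

```latex
% Generic (illustrative) Lyapunov--Krasovskii functional for a linear delay system.
V(x_t) = x^{\top}(t) P x(t)
       + \int_{t-h}^{t} x^{\top}(s)\, Q\, x(s)\, \mathrm{d}s
       + h \int_{-h}^{0} \int_{t+\theta}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta,
\qquad P, Q, R \succ 0 .
```

Asymptotic stability is concluded when the derivative of V along trajectories is negative definite; the cited papers differ mainly in how the integral terms in that derivative are bounded (Jensen-, Wirtinger- or free-matrix-based inequalities) before the condition is written as an LMI.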
Determining the Capacity Parameters in PEE-Based Reversible Image Watermarking. In the existing prediction-error expansion (PEE)-based reversible image watermarking schemes, the capacity parameters are determined in a recursive manner until the payload is just accommodated. This class of methods requires many rounds of embedding iterations, especially when the payload is high, and therefore, is computationally inefficient. Moreover, when multiple capacity parameters need to b... | Low distortion transform for reversible watermarking. This paper proposes a low-distortion transform for prediction-error expansion reversible watermarking. The transform is derived by taking a simple linear predictor and by embedding the expanded prediction error not only into the current pixel but also into its prediction context. The embedding ensures the minimization of the square error introduced by the watermarking. The proposed transform introduces less distortion than the classical prediction-error expansion for complex predictors such as the median edge detector or the gradient-adjusted predictor. Reversible watermarking algorithms based on the proposed transform are analyzed. Experimental results are provided. | Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection. Prediction-error expansion (PEE) is an important technique of reversible watermarking which can embed large payloads into digital images with low distortion. In this paper, the PEE technique is further investigated and an efficient reversible watermarking scheme is proposed, by incorporating in PEE two new strategies, namely, adaptive embedding and pixel selection. Unlike conventional PEE which embeds data uniformly, we propose to adaptively embed 1 or 2 bits into expandable pixel according to the local complexity. This avoids expanding pixels with large prediction-errors, and thus, it reduces embedding impact by decreasing the maximum modification to pixel values. Meanwhile, adaptive PEE allows very large payload in a single embedding pass, and it improves the capacity limit of conventional PEE. We also propose to select pixels of smooth area for data embedding and leave rough pixels unchanged. In this way, compared with conventional PEE, a more sharply distributed prediction-error histogram is obtained and a better visual quality of watermarked image is observed. With these improvements, our method outperforms conventional PEE. Its superiority over other state-of-the-art methods is also demonstrated experimentally. | Reducing location map in prediction-based difference expansion for reversible image data embedding In this paper, we present a reversible data embedding scheme based on an adaptive edge-directed prediction for images. It is known that the difference expansion is an efficient data embedding method. Since the expansion on a large difference will cause a significant embedding distortion, a location map is usually employed to select small differences for expansion and to avoid overflow/underflow problems caused by expansion. However, location map bits lower payload capacity for data embedding. To reduce the location map, our proposed scheme aims to predict small prediction errors for expansion by using an edge detector. Moreover, to generate a small prediction error for each pixel, an adaptive edge-directed prediction is employed which adapts reasonably well between smooth regions and edge areas. 
Experimental results show that our proposed data embedding scheme for natural images can achieve a high embedding capacity while keeping the embedding distortion low. | Localized Lossless Authentication Watermark (LAW) A novel framework is proposed for lossless authentication watermarking of images which allows authentication and recovery of original images without any distortions. This overcomes a significant limitation of traditional authentication watermarks that irreversibly alter image data in the process of watermarking and authenticate the watermarked image rather than the original. In particular, authenticity is verified before full reconstruction of the original image, whose integrity is inferred from the reversibility of the watermarking procedure. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not required. A particular instantiation of the framework is implemented using a hierarchical authentication scheme and the lossless generalized-LSB data embedding mechanism. The resulting algorithm, called localized lossless authentication watermark (LAW), can localize tampered regions of the image; has a low embedding distortion, which can be removed entirely if necessary; and supports public/private key authentication and recovery options. The effectiveness of the framework and the instantiation is demonstrated through examples. | Low Complexity, High Efficiency Probability Model for Hyper-spectral Image Coding This paper describes a low-complexity, high-efficiency lossy-to-lossless coding scheme for hyper-spectral images. Together with only a 2D wavelet transform on individual image components, the proposed scheme achieves coding performance similar to that achieved by a 3D transform strategy that adds one level of wavelet decomposition along the depth axis of the volume. The proposed schemes operates by means of a probability model for symbols emitted by the bit plane coding engine. This probability model captures the statistical behavior of hyper-spectral images with high precision. The proposed method is implemented in the core coding system of JPEG2000 reducing computational costs by 25%. | Digital watermarking robust to geometric distortions. In this paper, we present two watermarking approaches that are robust to geometric distortions. The first approach is based on image normalization, in which both watermark embedding and extraction are carried out with respect to an image normalized to meet a set of predefined moment criteria. We propose a new normalization procedure, which is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. The second approach is based on a watermark resynchronization scheme aimed to alleviate the effects of random bending attacks. In this scheme, a deformable mesh is used to correct the distortion caused by the attack. The watermark is then extracted from the corrected image. In contrast to the first scheme, the latter is suitable for private watermarking applications, where the original image is necessary for watermark detection. In both schemes, we employ a direct-sequence code division multiple access approach to embed a multibit watermark in the discrete cosine transform domain of the image. Numerical experiments demonstrate that the proposed watermarking schemes are robust to a wide range of geometric attacks. 
| Localized image watermarking based on feature points of scale-space representation This paper proposes a novel method for content-based watermarking based on feature points of an image. At each feature point, the watermark is embedded after scale normalization according to the local characteristic scale. Characteristic scale is the maximum scale of the scale-space representation of an image at the feature point. By binding watermarking with the local characteristics of an image, resilience against affine transformations can be obtained easily. Experimental results show that the proposed method is robust against various image processing steps including affine transformations, cropping, filtering and JPEG compression. | Optimal prefix codes for sources with two-sided geometric distributions A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for nonnegative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded | ACE: building interactive graphical applications | Training personal robots using natural language instruction As domestic robots become pervasive, uninitiated users will need a way to instruct them to adapt to their particular needs. The authors are designing a practical system that uses natural language to instruct a vision-based robot. | Representation of object-oriented data models | Software engineering-as it is This paper presents a view of software engineering as it is in 1979. It discusses current software engineering practice with respect to lessons learned in the past few years, and concludes that the lessons are currently not heeded roughly half of the time. The paper discusses some of the factors which may account for this lag, including rapid technological change, education shortfalls, technology transfer inhibitions, resistance to disciplined methods, inappropriate role models, and a restricted view of software engineering.
The paper also updates a 1976 state of the art survey of software engineering technology, including such topics as requirements and specifications, design, programming, verification and validation, maintenance, software psychology, and software economics. It concludes that the field is making solid progress, but that it is growing more complex at a faster rate than we can put it in order. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.203883 | 0.022964 | 0.007622 | 0.001632 | 0.000858 | 0.00044 | 0.000175 | 0.000064 | 0.000003 | 0 | 0 | 0 | 0 | 0 |
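The query and the first ranked abstracts of the row above all build on prediction-error expansion. The core reversible step is small enough to show directly; the sketch below is illustrative only and omits the predictors, capacity parameters, location maps and overflow handling that those papers are actually about.

```python
# Illustrative prediction-error expansion (PEE) on a single pixel value.

def pee_embed(pixel: int, predicted: int, bit: int) -> int:
    """Expand the prediction error and append one payload bit."""
    error = pixel - predicted
    expanded = 2 * error + bit      # shift the error left by one, insert the bit
    return predicted + expanded


def pee_extract(marked: int, predicted: int):
    """Recover the payload bit and restore the original pixel exactly."""
    expanded = marked - predicted
    bit = expanded & 1
    error = (expanded - bit) // 2
    return predicted + error, bit


original, predicted = 120, 118
marked = pee_embed(original, predicted, 1)
restored, bit = pee_extract(marked, predicted)
assert (restored, bit) == (original, 1)
print(marked, restored, bit)
```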
Distributed Moving-Horizon Estimation with Arrival-Cost Consensus The paper deals with the problem of estimating the state of a linear system over a peer-to-peer network of linear sensors. The proposed approach is fully distributed, scalable, and allows for taking into account constraints on noise and state variables by resorting to the moving-horizon estimation paradigm. Each network node computes its local state estimate by minimizing a cost function defined over a sliding window of fixed size. The cost function includes a fused arrival cost which is computed in a distributed way by performing a consensus on the local arrival costs. The proposed estimator guarantees stability of the estimation error dynamics in all network nodes, under the minimal requirements of network connectivity and collective observability, and for any number of consensus steps. Numerical simulations are provided to demonstrate the practical effectiveness of the approach. | An Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm.
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them.
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
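The query of the row ending just above describes distributed moving-horizon estimation with a consensus-fused arrival cost. For orientation, a generic single-node moving-horizon cost of the kind being fused is sketched below; the notation is illustrative and is not taken from the paper.

```latex
% Generic (illustrative) moving-horizon estimation cost over a window of length N.
\min_{\hat{x}_{t-N},\dots,\hat{x}_{t}} \;
  \Gamma_{t-N}\!\left(\hat{x}_{t-N}\right)
  + \sum_{k=t-N}^{t-1} \bigl\lVert \hat{x}_{k+1} - A \hat{x}_{k} \bigr\rVert^{2}_{Q^{-1}}
  + \sum_{k=t-N}^{t} \bigl\lVert y_{k} - C \hat{x}_{k} \bigr\rVert^{2}_{R^{-1}}
```

Here Γ_{t-N} is the arrival cost summarizing the data discarded from the window; in the distributed scheme described above, each node's local arrival cost is fused with those of its neighbours through a consensus step before the local minimization.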
An ontological knowledge-based system for the selection of process monitoring and analysis tools Efficient process monitoring and analysis tools provide the means for automated supervision and control of manufacturing plants and therefore play an important role in plant safety, process control and assurance of end product quality. The availability of a large number of different process monitoring and analysis tools for a wide range of operations has made their selection a difficult, time consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools, satisfying the process and user constraints. A knowledge base consisting of the process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one hand, it facilitates the selection of proper monitoring and analysis tools for a given application or process. On the other hand, it permits the identification of potential applications for a given monitoring technique or tool. An efficient inference system based on forward as well as reverse search procedures has been developed to retrieve the data/information stored in the knowledge base. | An Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment.
This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems.
Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. 
However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
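The query of the row ending just above describes a knowledge base that can be searched forward (from process requirements to suitable monitoring tools) and in reverse (from a tool to its possible applications). A toy sketch of those two search directions is given below; the tool names, attributes and thresholds are invented for illustration and are not taken from the paper.

```python
# Toy sketch (not the paper's system): match tools to requirements (forward)
# and list applications for a tool (reverse). All entries are hypothetical.

TOOLS = {
    "NIR spectrometer": {"measures": {"composition"}, "inline": True,  "max_temp_C": 150},
    "pH probe":         {"measures": {"pH"},          "inline": True,  "max_temp_C": 100},
    "offline GC":       {"measures": {"composition"}, "inline": False, "max_temp_C": 400},
}


def select_tools(requirements: dict) -> list:
    """Forward direction: tools whose declared capabilities cover the requirements."""
    hits = []
    for name, attrs in TOOLS.items():
        ok = (requirements["variable"] in attrs["measures"]
              and attrs["inline"] >= requirements.get("inline", False)
              and attrs["max_temp_C"] >= requirements.get("temp_C", 0))
        if ok:
            hits.append(name)
    return hits


def applications_for(tool: str) -> set:
    """Reverse direction: which measured variables a given tool can serve."""
    return TOOLS[tool]["measures"]


print(select_tools({"variable": "composition", "inline": True, "temp_C": 120}))
print(applications_for("pH probe"))
```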
Complete LKF approach to stabilization for linear systems with time-varying input delay This paper is concerned with stability analysis and stabilization for linear time-delay systems with interval time-varying input delay. By using an augmented complete Lyapunov–Krasovskii functional (LKF) and introducing appropriate terms in dealing with the positiveness of the LKF, we establish new stability and stabilization criteria in terms of linear matrix inequalities (LMIs). The present method leads to some significant improvements over existing results. Moreover, the main feature of this work lies in that the present results are applicable for time-delay systems with unstable delay-free case. Three numerical examples are given to show the effectiveness and merits of the present results. | Quantized Observer-Based Sliding Mode Control for Networked Control Systems Via the Time-Delay Approach. This paper investigates the sliding mode control problem for networked control systems, which are influenced by the non-ideal network environment, such as network-induced delays, packet dropouts and quantization errors. The states of the system are assumed to be unavailable, and an observer is designed to estimate the state of the system, based on which a sliding mode controller is given to guarantee the closed-loop system to be stable. Furthermore, it is shown that the proposed control scheme ensures the reachability of the sliding surfaces in both the state estimate space and the estimation error space. Finally, a numerical example is given to illustrated the effectiveness of the proposed methodology. | Stability Analysis of Distributed Delay Neural Networks Based on Relaxed Lyapunov-Krasovskii Functionals. This paper revisits the problem of asymptotic stability analysis for neural networks with distributed delays. The distributed delays are assumed to be constant and prescribed. Since a positive-definite quadratic functional does not necessarily require all the involved symmetric matrices to be positive definite, it is important for constructing relaxed Lyapunov-Krasovskii functionals, which generally lead to less conservative stability criteria. Based on this fact and using two kinds of integral inequalities, a new delay-dependent condition is obtained, which ensures that the distributed delay neural network under consideration is globally asymptotically stable. This stability criterion is then improved by applying the delay partitioning technique. Two numerical examples are provided to demonstrate the advantage of the presented stability criteria. | Free-Matrix-Based Integral Inequality for Stability Analysis of Systems With Time-Varying Delay The free-weighting matrix and integral-inequality methods are widely used to derive delay-dependent criteria for the stability analysis of time-varying-delay systems because they avoid both the use of a model transformation and the technique of bounding cross terms. This paper presents a new integral inequality, called a free-matrix-based integral inequality, that further reduces the conservativeness in those methods. It includes well-known integral inequalities as special cases. Using it to investigate the stability of systems with time-varying delays yields less conservative delay-dependent stability criteria, which are given in terms of linear matrix inequalities. Two numerical examples demonstrate the effectiveness and superiority of the method. | Formal Derivation of Strongly Correct Concurrent Programs. 
Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from
deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of
synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant
and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences
associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary
variables is also given. The applicability of the techniques presented is discussed through various examples; their use for
verification purposes is illustrated as well. | A mathematical perspective for software measures research Basic principles which necessarily underlie software measures research are analysed. In the prevailing paradigm for the validation of software measures, there is a fundamental assumption that the sets of measured documents are ordered and that measures should report these orders. The authors describe, mathematically, the nature of such orders. Consideration of these orders suggests a hierarchy of software document measures, a methodology for developing new measures and a general approach to the analytical evaluation of measures. They also point out the importance of units for any type of measurement and stress the perils of equating document structure complexity and psychological complexity. | Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing. | ACE: building interactive graphical applications | Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it.—Authors' Abstract | Abstract Syntax and Semantics of Visual Languages The effective use of visual languages requires a precise understanding of their meaning. Moreover, it is impossible to prove properties of visual languages like soundness of transformation rules or correctness results without having a formal language definition. Although this sounds obvious, it is surprising that only little work has been done about the semantics of visual languages, and even worse, there is no general framework available for the semantics specification of different visual languages. We present such a framework that is based on a rather general notion of abstract visual syntax. This framework allows a logical as well as a denotational approach to visual semantics, and it facilitates the formal reasoning about visual languages and their properties. We illustrate the concepts of the proposed approach by defining abstract syntax and semantics for the visual languages VEX, Show and Tell and Euler circles. We demonstrate the semantics in action by proving a rule for visual reasoning with Euler circles and by showing the correctness of a Show and Tell program.
| Foundations of 4Thought 4Thought, a prototype design tool, is based on the notion that design artifacts are complex, formal, mathematical objects that require complementary textual and graphical views to be adequately comprehended. This paper describes the combined use of Entity- Relationship modelling and GraphLog to bridge the textual and graphical views. These techniques are illustrated by an example that is formally specified in Z Notation. | S/NET: A High-Speed Interconnect for Multiple Computers This paper describes S/NET (symmetric network), a high-speed small area interconnect that supports effective multiprocessing using message-based communication. This interconnect provides low latency, bounded contention time, and high throughput. It further provides hardware support for low level flow control and signaling. The interconnect is a star network with an active switch. The computers connect to the switch through full duplex fiber links. The S/NET provides a simple memory addressable interface to the processors and appears as a logical bus interconnect. The switch provides fast, fair, and deterministic contention resolution. It further supports high priority signals to be sent unimpeded in presence of data traffic (this can viewed as equivalent to interrupts on a conventional memory bus). The initial implementation supports a mix of VAX computers and Motorola 68000 based single board computers up to a maximum of 12. The switch throughput is 80 Mbits/s and the fiber links operate at a data rate of 10 Mbits/s. The kernel-to-kernel latency is only100 mus. We present a description of the architecture and discuss the performance of current systems. | Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers, A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems, A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements, Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.2 | 0.014286 | 0.001613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
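The query of the row above centres on Lyapunov-Krasovskii functionals and LMI-based stability tests for systems with time-varying input delay. As a rough, self-contained illustration of what a delay-independent stability check looks like, the sketch below implements a classical matrix-measure sufficient condition rather than the LMI criteria of the cited papers; the system matrices are made-up examples.

```python
# Hypothetical sketch: a classical delay-independent sufficient condition for
#   dx/dt = A x(t) + Ad x(t - tau(t)),  tau(t) >= 0.
# The system is asymptotically stable for every delay if mu2(A) + ||Ad||_2 < 0,
# where mu2(A) = lambda_max((A + A^T)/2) is the matrix measure and ||.||_2 the
# spectral norm. This is far more conservative than the LKF/LMI criteria in the
# papers above, but it shows the shape of such a feasibility test in one line.
import numpy as np

A = np.array([[-3.0, 0.5],
              [0.2, -2.5]])      # assumed example system matrix
Ad = np.array([[0.4, 0.1],
               [0.0, 0.3]])      # assumed delayed-state matrix

mu2 = np.linalg.eigvalsh((A + A.T) / 2.0).max()   # matrix measure of A
norm_ad = np.linalg.norm(Ad, 2)                   # spectral norm of Ad

print(f"mu2(A) = {mu2:.3f}, ||Ad||_2 = {norm_ad:.3f}")
print("delay-independent stability guaranteed:", mu2 + norm_ad < 0)
```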
An open system framework for component-based CNC machines This paper describes a framework for open, component-based, manufacturing controllers. The framework is based on the analysis of computer numerically controlled (CNC) machines. The framework includes a control class hierarchy, plug-and-play modules aggregated from the class hierarchy, and a model of collaboration. The framework can be used to build applications that range from a single-axis device to a multi-arm robot. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.
| Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models.
The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Leader-following consensus for networked multi-teleoperator systems via stochastic sampled-data control. In this paper, the leader-following consensus problem is investigated for a networked multi-teleoperator system (NMTS) under a stochastic sampled-data controller. By utilizing the input delay approach, the sampling period is transformed into a time-varying yet bounded delay. With the help of algebraic graph theory and probability theory, a new consensus protocol is designed. Subsequently, a sufficient condition for the leader-following consensus of NMTS is derived in the form of linear matrix inequality (LMI) by constructing an appropriate Lyapunov-Krasovskii functional and by utilizing some matrix and integral inequality techniques. Based on the derived condition, the design method of the desired sampled-data controller is also obtained in terms of the solution to LMI which can be checked effectively by using available software. In addition, the effectiveness of the obtained theoretical results is illustrated by a numerical example. | On Exploring the Domain of Attraction for Bilateral Teleoperator Subject to Interval Delay and Saturated P + d Control Scheme. The domain of attraction problem is investigated for networked teleoperation system subject to actuator saturation. The forward and backward time-varying communication delays are assumed to be interval and asymmetric, which is the case for network-based teleoperation system. We propose a novel Lyapunov-Krasovskii functional for the closed-loop teleoperation system with the consideration of the interval values of the time delays. The delay-dependent estimation of the domain of attraction is presented using linear matrix inequality (LMI) technique. The problem of designing P + d control law such that the domain of attraction is enlarged is formulated and solved as an optimization problem with LMI constraints. Experiments are performed to verify the effectiveness of the proposed approach. | Stability of linear systems with general sawtooth delay It is well known that in many particular systems, the upper bound on a certain time-varying delay that preserves the stability may be higher than the corresponding bound for the constant delay. Moreover, sometimes oscillating delays improve the performance (Michiels, W., Van Assche, V. & Niculescu, S. (2005) Stabilization of time-delay systems with a controlled time-varying delays and applications... | Asynchronous Output-Feedback Control of Networked Nonlinear Systems With Multiple Packet Dropouts: T–S Fuzzy Affine Model-Based Approach This paper investigates the problem of robust output-feedback control for a class of networked nonlinear systems with multiple packet dropouts. The nonlinear plant is represented by Takagi-Sugeno (T-S) fuzzy affine dynamic models with norm-bounded uncertainties, and stochastic variables that satisfy the Bernoulli random binary distribution are adopted to characterize the data-missing phenomenon. The objective is to design an admissible output-feedback controller that guarantees the stochastic stability of the resulting closed-loop system with a prescribed disturbance attenuation level. It is assumed that the plant premise variables, which are often the state variables or their functions, are not measurable so that the controller implementation with state-space partition may not be synchronous with the state trajectories of the plant. 
Based on a piecewise quadratic Lyapunov function combined with an S-procedure and some matrix inequality convexifying techniques, two different approaches to robust output-feedback controller design are developed for the underlying T-S fuzzy affine systems with unreliable communication links. The solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches. | Finite-time H∞ fuzzy control of nonlinear Markovian jump delayed systems with partly uncertain transition descriptions. This paper addresses a finite-time H∞ fuzzy control problem for a class of nonlinear Markovian jump delayed systems with partly uncertain transition descriptions, which is represented as a Takagi–Sugeno (T–S) fuzzy model. A new homogeneous polynomial of partly uncertain transition rates is chosen. Free-matrix-based and double integral forms of the Wirtinger-based integral inequalities are employed to make the proposed approach less conservative. Then sufficient conditions are derived such that the fuzzy nonlinear Makovian jump delayed system exhibits stochastic finite-time boundedness. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methodology. | Pre-specified performance based model reduction for time-varying delay systems in fuzzy framework This paper attempts to provide a new solution to the model approximation problem for dynamic systems with time-varying delays under the fuzzy framework. For a given high-order system, our focus is on the construction of a reduced-order model, which approximates the original one in a prescribed error performance level and guarantees the asymptotic stability of the corresponding error system. Based on the reciprocally convex technique, a less conservative stability condition is established for the dynamic error system with a given error performance index. Furthermore, the reduced-order model is eventually obtained by applying the projection approach, which converts the model approximation into a sequential minimization problem subject to linear matrix inequality constraints by employing the cone complementary linearization algorithm. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed method. | Brief Robust control of uncertain distributed delay systems with application to the stabilization of combustion in rocket motor chambers The problems of robust stability and robust stabilization of uncertain linear systems with distributed delay occurring in the state variables are studied in this paper. The essential requirement for the uncertainties is that they are norm-bounded with known bounds. Conditions for the robust stability of distributed time delay systems are given and a design method for the robust stabilizing control law of the uncertain systems is presented. The proposed method is applied to the stabilization of combustion in the chamber of a liquid monopropellant rocket motor. It is found that the combustion can be robustly stabilized when the two parameters pressure exponent γ and maximal time lag r vary in specified intervals, respectively. | Exponential stability of impulsive systems with application to uncertain sampled-data systems We establish exponential stability of nonlinear time-varying impulsive systems by employing Lyapunov functions with discontinuity at the impulse times. 
Our stability conditions have the property that when specialized to linear impulsive systems, the stability tests can be formulated as Linear Matrix Inequalities (LMIs). Then we consider LTI uncertain sampled-data systems in which there are two sources of uncertainty: the values of the process parameters can be unknown while satisfying a polytopic condition and the sampling intervals can be uncertain and variable. We model such systems as linear impulsive systems and we apply our theorem to the analysis and state-feedback stabilization. We find a positive constant which determines an upper bound on the sampling intervals for which the stability of the closed loop is guaranteed. The control design LMIs also provide controller gains that can be used to stabilize the process. We also consider sampled-data systems with constant sampling intervals and provide results that are less conservative than the ones obtained for variable sampling intervals. | Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automation which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed. | Verifying properties of parallel programs: an axiomatic approach An axiomatic method for proving a number of properties of parallel programs is presented. Hoare has given a set of axioms for partial correctness, but they are not strong enough in most cases. This paper defines a more powerful deductive system which is in some sense complete for partial correctness. A crucial axiom provides for the use of auxiliary variables, which are added to a parallel program as an aid to proving it correct. The information in a partial correctness proof can be used to prove such properties as mutual exclusion, freedom from deadlock, and program termination. Techniques for verifying these properties are presented and illustrated by application to the dining philosophers problem. | Cell Modeling Using Agent-Based Formalisms The systems biology community is building increasingly complex models and simulations of cells and other biological entities. This community is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). Making use of the object-oriented (OO) paradigm, the Unified Modeling Language (UML) and Real-time Object-Oriented Modeling (ROOM) visual formalisms, we describe a simple model that includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. 
We demonstrate the validation of the model by comparison with Gepasi and comment on the reusability of model components. | Visualization of Path Expressions in a Virtual Object-Oriented Database Query Language Path expressions have been accepted for concisely manipulating the nested structures in complex object-oriented query expressions. However, previous visual query languages hardly represent such query expressions in a concise and intuitive way partly due to improper visual representation of path expressions and partly due to lack of well-defined syntax and semantics of languages. In this paper, we present visual modeling of path expressions in a visual object-oriented database query language called Visual Object-Oriented Query Language (VOQL) which has excellent expressive power for sets, simple and intuitive syntax, and well-defined semantics. This is enabled by explicitly specifying the semantics of multi-valued path expressions based on the visual notation capable of representing set relationships in addition to functional relationships. The basic visual constructs called blobs and nested blobs denote sets of objects that path expressions represent while the constructs called binding edges and flattening edges visually simulate the notions of variable binding and dot functions in path expressions respectively. Based on the constructs, the grammar of VOQL defines the syntactic components while the semantics of query expressions are provided by syntax-directed translation to the counterparts in the extended relational calculus. Also, the visual constructs allow modeling of restricted universal quantification with a visual scoping box and effectively represent nested quantification and recursive queries without semantic ambiguities. An explicit specification of the semantics of multi-valued path expressions in a concise and unified visual notation is new and visually clarifies the semantics of quantified queries in the nested structures. | Evolution of legal statements on the web In this paper we propose to study the evolution of legal statements that can be found in Web sites. Legal statements are an important part of each Web site because they can be seen as a contract between the owner of the site and its users. For example, a site's privacy policy explains what kind of data is collected from users by the operator and how it is processed. Operators use terms of use to put restrictions on the conduct of users. In this paper we describe our proposal for a research agenda and methodology that analyzes the evolution of legal statements on the Web. The research agenda argues that studying the content of legal statements and how they change over time allows to analyze and understand the evolution of the Web from different viewpoints. Specifically, changing legal statements allow to identify emerging legal developments, to expose shifting business objectives, and to track the balance of power between operators and users. Our suggested methodology proposes to obtain historical snapshots of Web sites available in the Internet Archive, to group them into different classes, and to analyze the content of the legal document as well as to compute metrics such as size and readability scores. The obtained data can then be used to formulate hypotheses about the evolution of certain characteristics of the Web. We discuss a pilot study that instantiates our methodology.
This study is based on five snapshots of 15 different Web sites, and it shows that the methodology is feasible and can generate meaningful results. | NLP-Based Classifiers to Generalize Expert Assessments in E-Reputation Online Reputation Management (ORM) is currently dominated by expert abilities. One of the great challenges is to effectively collect annotated training samples, especially to be able to generalize a small pool of expert feedback from area scale to a more global scale. One possible solution is to use advanced Machine Learning (ML) techniques, to select annotations from training samples, and propagate effectively and concisely. We focus on the critical issue of understanding the different levels of annotations. Using the framework proposed by the RepLab contest we present a considerable number of experiments in Reputation Monitoring and Author Profiling. The proposed methods rely on a large variety of Natural Language Processing (NLP) methods exploiting tweet contents and some background contextual information. We show that simple algorithms only considering tweet content are effective against state-of-the-art techniques. | 1.112 | 0.025 | 0.010339 | 0.0015 | 0.000462 | 0.000087 | 0.000019 | 0.000003 | 0 | 0 | 0 | 0 | 0 | 0
Multiband Lossless Compression of Hyperspectral Images Hyperspectral images exhibit significant spectral correlation, whose exploitation is crucial for compression. In this paper, we investigate the problem of predicting a given band of a hyperspectral image using more than one previous band. We present an information-theoretic analysis based on the concept of conditional entropy, which is used to assess the available amount of correlation and the pot... | Remote-Sensing Image Compression Using Two-Dimensional Oriented Wavelet Transform In this paper, a 2-D oriented wavelet transform (OWT) is introduced for efficient remote-sensing image compression. The proposed 2-D OWT can perform integrative oriented transform in arbitrary direction and achieve a significant transform coding gain. To maximize the transform coding gain, two separable 1-D transforms are implemented in the same direction for local areas with direction consistency. Subpixel interpolation rules are designed for rectangular subbands generation. In addition, semidirection displacement is adjusted to handle direction mismatch after the first 1-D transform. Experimental results demonstrate that the proposed 2-D OWT compression scheme outperforms JPEG2000 for remote-sensing images with high resolution, up to 0.43 dB in peak signal-to-noise ratio (PSNR), 0.0261 in the measure of structural similarity, 0.44% in Kappa coefficients, respectively, and significant subjective improvement. Meanwhile, it outperforms JPEG2000, previous adaptive directional lifting and weighted adaptive lifting methods, up to 1.98, 0.36, and 0.19 dB in PSNR for natural images. Furthermore, it is suitable for real-time remote-sensing processing for its low computational cost. | Nonlinear Unmixing of Hyperspectral Images Using a Generalized Bilinear Model Nonlinear models have recently shown interesting properties for spectral unmixing. This paper studies a generalized bilinear model and a hierarchical Bayesian algorithm for unmixing hyperspectral images. The proposed model is a generalization not only of the accepted linear mixing model but also of a bilinear model that has been recently introduced in the literature. Appropriate priors are chosen for its parameters to satisfy the positivity and sum-to-one constraints for the abundances. The joint posterior distribution of the unknown parameter vector is then derived. Unfortunately, this posterior is too complex to obtain analytical expressions of the standard Bayesian estimators. As a consequence, a Metropolis-within-Gibbs algorithm is proposed, which allows samples distributed according to this posterior to be generated and to estimate the unknown model parameters. The performance of the resulting unmixing strategy is evaluated via simulations conducted on synthetic and real data. | Nonlinear Elastic Model for Flexible Prediction of Remotely Sensed Multitemporal Images While an increasing number of satellite images are collected over a regular period in order to provide regular spatiotemporal information on land-use and land-cover changes, there are very few compression schemes in remotely sensed imagery that use historical data as a reference. Just as individual images can be compressed for separate transmission by taking into account their inherent spatial and spectral redundancies, the temporal redundancy between images of the same scene can also be exploited for sequential transmission. 
In this letter, we propose a nonlinear elastic method based on the general relationship to predict adaptively the current image from a previous reference image without any loss of information. The main feature of the developed method is to find the best prediction for each pixel brightness value individually using its own conditional probabilities to the previous image, instead of applying a single linear or nonlinear model. A codebook is generated to record the nonlinear point-to-point relationship. This temporal lossless compression is incorporated with spatial- and spectral-domain predictions, and the performances are compared with those of the JPEG2000 standard. The experimental results show an improved performance by more than 5%. | Statistical Atmospheric Parameter Retrieval Largely Benefits From Spatial-Spectral Image Compression. The infrared atmospheric sounding interferometer (IASI) is flying on board of the Metop satellite series, which is part of the EUMETSAT Polar System. Products obtained from IASI data represent a significant improvement in the accuracy and quality of the measurements used for meteorological models. Notably, the IASI collects rich spectral information to derive temperature and moisture profiles, amo... | Distributed Lossless Coding Techniques for Hyperspectral Images In this paper, we present a novel distributed coding scheme for lossless, progressive and low complexity compression of hyperspectral images. Hyperspectral images have several unique requirements that are vastly different from consumer images. Among them, lossless compression, progressive transmission, and low complexity onboard processing are the three most prominent ones. To satisfy these requirements, we design a distributed coding scheme that shifts the complexity of data decorrelation to the decoder side to achieve lightweight onboard processing after image acquisition. At the encoder, the images are subsampled in order to facilitate successive encoding and progressive transmission. At the decoder, we generate the side information with an adaptive region-based predictor by taking full advantage of the decoded subsampled images and previously decoded neighboring bands based on the assumption that the objects appearing in different bands are highly correlated. The proposed progressive transmission via subsampling enables the spectral correlation to be refined successively, resulting in gradually improved decoding performance of higher-resolution layers as more sub-images are decoded. Experimental results on the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data demonstrate that the proposed scheme is able to achieve competitive compression performance compared with the state-of-the-art 3D schemes, including existing distributed source coding (DSC) schemes. The proposed scheme has even lower encoding complexity than that of the conventional 2D schemes. | Lossy-to-Lossless Compression of Hyperspectral Imagery Using Three-Dimensional TCE and an Integer KLT An embedded lossy-to-lossless coder for hyperspectral images is presented. The proposed coder couples a reversible integer-valued Karhunen-Loeve transform with an extension into 3-D of the tarp-based coding with classification for embedding (TCE) algorithm that was originally developed for lossy coding of 2-D images. The resulting coder obtains lossy-to-lossless operation while closely matching th...
| An Operational Approach to PCA+JPEG2000 Compression of Hyperspectral Imagery Lossy-compression algorithms typically adopted for hyperspectral remote-sensing imagery-such as JPEG2000-usually produce a monotonically increasing signal-to-noise ratio (SNR) for increasing bitrate. Consequently, it is a common philosophy to employ as large a bitrate as possible so as to obtain the highest achievable SNR. However, it has been observed previously that a higher SNR may not necessarily correspond to better performance at data-analysis tasks, such as classification, anomaly detection, or linear unmixing. Considered specifically is the coupling of JPEG2000 with principal component analysis for spectral decorrelation such that only a few principal components are retained, and, for this compression paradigm, a technique to determine an operational bitrate is proposed with the aim of preserving both the majority of information in a dataset as well as its anomalous pixels. This operational bitrate may be much less than the largest bitrate that the system can allow. Experimental results show that classification and unmixing applied to reconstructed data after compression at this operational bitrate result in performance that is the same as or better than that achieved at higher bitrates; meanwhile, removal and lossless storage of anomalies prior to compression results in their perfect preservation in the reconstructed dataset. | Context modeling for near-lossless image coding This letter describes a context-based entropy coding suitable for any causal spatial differential pulse code modulation (DPCM) scheme performing lossless or near-lossless image coding. The proposed method is based on partitioning of prediction errors into homogeneous classes before arithmetic coding. A context function is measured on prediction errors lying within a two-dimensional (2-D) causal neighborhood, comprising the prediction support of the current pixel, as the root mean square (RMS) of residuals weighted by the reciprocal of their Euclidean distances. Its effectiveness is demonstrated in comparative experiments concerning both lossless and near-lossless coding. The proposed context coding/decoding is strictly real-time. | Stepwise Refinement of Action Systems A method for the formal development of provably correct parallel algorithms by stepwise refinement is presented. The entire derivation procedure is carried out in the context of purely sequential programs. The resulting parallel algorithms can be efficiently executed on different architectures. The methodology is illustrated by showing the main derivation steps in a construction of a parallel algorithm for matrix multiplication. | On the Secrecy Capacity of Fading Channels We consider the secure transmission of information over an ergodic fading channel in the presence of an eavesdropper. Our eavesdropper can be viewed as the wireless counterpart of Wyner's wiretapper. The secrecy capacity of such a system is characterized under the assumption of asymptotically long coherence intervals. We first consider the full channel state information (CSI) case, where the transmitter has access to the channel gains of the legitimate receiver and the eavesdropper. The secrecy capacity under this full CSI assumption serves as an upper bound for the secrecy capacity when only the CSI of the legitimate receiver is known at the transmitter, which is characterized next. 
In each scenario, the perfect secrecy capacity is obtained along with the optimal power and rate allocation strategies. We then propose a low-complexity on/off power allocation strategy that achieves near-optimal performance with only the main channel CSI. More specifically, this scheme is shown to be asymptotically optimal as the average signal-to-noise ratio (SNR) goes to infinity, and interestingly, is shown to attain the secrecy capacity under the full CSI assumption. Overall, channel fading has a positive impact on the secrecy capacity and rate adaptation, based on the main channel CSI, is critical in facilitating secure communications over slow fading channels. | On formal aspects of electronic (or digital) commerce: examples of research issues and challenges The notion of electronic or digital commerce is gaining widespread popularity. By and large, these developments are being led by industry and government, with academic research following these trends in the form of empirical and economic research. Much more fundamental improvements to (global) commerce are possible, but are presently being overlooked for lack of adequate formal theories, representations and tools. This paper attempts to incite research in these directions. | Program Construction by Parts. Given a specification that includes a number of user requirements, we wish to focus on the requirements in turn, and derive a partly defined program for each; then combine all the partly defined programs into a single program that satisfies all the requirements simultaneously. In this paper we introduce a mathematical basis for solving this problem; and we illustrate it by means of a simple example. 1 Introduction and Motivation We propose a program construction method whereby, given a... | Trading Networks with Bilateral Contracts. We consider general networks of bilateral contracts that include supply chains. We define a new stability concept, called trail stability, and show that any network of bilateral contracts has a trail-stable outcome whenever agents' preferences satisfy full substitutability. Trail stability is a natural extension of chain stability, but is a stronger solution concept in general contract networks. Trail-stable outcomes are not immune to deviations of arbitrary sets of firms. In fact, we show that outcomes satisfying an even more demanding stability property -- full trail stability -- always exist. We pin down conditions under which trail-stable and fully trail-stable outcomes have a lattice structure. We then completely describe the relationships between all stability concepts. When contracts specify trades and prices, we also show that competitive equilibrium exists in networked markets even in the absence of fully transferrable utility. The competitive equilibrium outcome is trail-stable. | 1.00915 | 0.009352 | 0.008914 | 0.008571 | 0.007143 | 0.003838 | 0.002464 | 0.00111 | 0.000064 | 0 | 0 | 0 | 0 | 0
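The query of the row above is about predicting a hyperspectral band from more than one previous band and assessing the resulting redundancy reduction. A toy sketch of that idea follows, using a least-squares linear predictor on synthetic, spectrally correlated data; the cube, band index and predictor order are hypothetical choices for illustration, not the method of any paper listed.

```python
# Hypothetical sketch: predict band k of a hyperspectral cube from the two
# previous bands with a least-squares linear predictor, then compare the
# zeroth-order empirical entropy of the raw band and of the prediction residual.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, bands = 64, 64, 8
base = rng.normal(size=(rows, cols))
cube = np.stack([base * (1.0 + 0.1 * b) + 0.05 * rng.normal(size=(rows, cols))
                 for b in range(bands)], axis=2)       # synthetic, spectrally correlated
cube = np.round(64.0 * (cube - cube.min())).astype(np.int32)

def entropy(x):
    """Empirical zeroth-order entropy in bits per sample."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

k = 5                                                   # band to predict
X = np.stack([cube[:, :, k - 1].ravel(),                # previous band
              cube[:, :, k - 2].ravel(),                # band before that
              np.ones(rows * cols)], axis=1)            # bias term
y = cube[:, :, k].ravel()
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)          # least-squares predictor
residual = y - np.round(X @ coeffs).astype(np.int64)

print(f"raw band entropy : {entropy(y):.2f} bits/sample")
print(f"residual entropy : {entropy(residual):.2f} bits/sample")
```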
A tolerant JPEG-LS image compressor foreseeing COTS FPGA implementation. Study of a compact solution for onboard tolerant image compression. Low-complexity JPEG-LS image compression standard allows considering medium-size flash or antifuse FPGAs for future use in small satellites. Widespread TMR and Hamming code plus scrubbing selected to mitigate error accumulation, considering a LEO space radiation environment. Evaluation of the effectiveness of the mitigation strategy by using a simulation-based susceptibility analysis method. Results pointed out two orders-of-magnitude reduction in the susceptibility estimate and enough room for improvements. A compact solution for onboard tolerant image compression is studied and the effectiveness of the soft-error mitigation strategy is evaluated by using a simulation-based susceptibility analysis method. The low complexity JPEG-LS compression algorithm allows considering medium-size flash or antifuse COTS FPGAs as a target for future use in small satellites. Fault mitigation methods, like Triple Modular Redundancy and Hamming code, with scrubbing to mitigate residual error accumulation, were selected taking into account operation in LEO space missions. The results point out the viability of implementing a tolerant image compression system in a single device with two orders-of-magnitude reduction in the susceptibility estimate based on a non-tolerant reference VHDL code. The effectiveness of the mitigation strategy, the injection model accuracy and possible improvements are discussed herein. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems.
Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification.
Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one a designer must show that conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication.
However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
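The query of the row above builds on JPEG-LS. The core of JPEG-LS prediction is the LOCO-I median edge detector (MED); the sketch below implements it in plain Python with simplified border handling, and omits the rest of the codec (context modelling, Golomb coding, near-lossless mode).

```python
# The median edge detector (MED) predictor used by LOCO-I / JPEG-LS:
# each pixel is predicted from its left (a), upper (b) and upper-left (c) neighbours.
import numpy as np

def med_residuals(img):
    """Return MED prediction residuals for a 2-D integer image (simplified borders)."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                   # left
            b = img[y - 1, x] if y > 0 else 0                   # above
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0     # above-left
            if c >= max(a, b):
                p = min(a, b)        # vertical/horizontal edge detected
            elif c <= min(a, b):
                p = max(a, b)
            else:
                p = a + b - c        # smooth region: planar prediction
            pred[y, x] = p
    return img - pred                # residuals to be entropy coded

# Toy usage on a small gradient image: residuals are mostly near zero.
img = np.add.outer(np.arange(16), np.arange(16)).astype(np.uint8)
print("max |residual|:", int(np.abs(med_residuals(img)).max()))
```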
Performance Comparison Between NOMA and OMA Relaying Protocols in Multi-Hop Networks over Nakagami-m Fading Channels under Impact of Hardware Impairments In this paper, we evaluate and compare the performance of multi-hop relaying (MR) protocols under the impact of hardware impairments, in terms of outage probability (OP) and throughput (TP). By applying the non-orthogonal multiple access (NOMA) technique at each hop, the end-to-end data rate/throughput of the MR protocol can be enhanced, as compared with the conventional one. Particularly, the transmitter at each hop combines two signals, and forwards the combined signal to the receiver which uses successive interference cancelation (SIC) to extract the data. For performance evaluation and comparison, we derive exact closed-form expressions of OP and TP for the considered protocols over Nakagami-m channels. Monte Carlo simulations are then performed to verify the theoretical derivations. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm.
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them.
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Process Design engineering a Methodology for Real-time Software development This paper describes the Process Design Methodology, a disciplined engineering approach to development of a Real-Time Software Process. The approach described is part of an overall software research thrust, sponsored by the BMD Advanced Technology Center, which is directed at resolving fundamental problems of excessive cost, failure to meet schedules, and inadequate performance associated with the specification, design, implementation, and testing of BMD software processes. | The Software Development System This paper presents a discussion of the Software Development System (SDS), a methodology addressing the problems involved in the development of software for Ballistic Missile Defense systems. These are large, real-time, automated systems with a requirement for high reliability. The SDS is a broad approach attacking problems arising in requirements generation, software design, coding, and testing. The approach is highly requirements oriented and has resulted in the formulation of structuring concepts, a requirements statement language, process design language, and support software to be used throughout the development cycle. This methodology represents a significant advance in software technology for the development of software for a class of systems such as BMD. The support software has been implemented and is undergoing evaluation. | Software requirements: Are they really a problem? Do requirements arise naturally from an obvious need, or do they come about only through diligent effort—and even then contain problems? Data on two very different types of software requirements were analyzed to determine what kinds of problems occur and whether these problems are important. The results are dramatic: software requirements are important, and their problems are surprisingly similar across projects. New software engineering techniques are clearly needed to improve both the development and statement of requirements. | A Requirements Engineering Methodology for Real-Time Processing Requirements This paper describes a methodology for the generation of software requirements for large, real-time unmanned weapons systems. It describes what needs to be done, how to evaluate the intermediate products, and how to use automated aids to improve the quality of the product. An example is provided to illustrate the methodology steps and their products and the benefits. The results of some experimental applications are summarized. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. 
| A semantics of multiple inheritance this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming. | The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism. | A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section. | Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is a made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information. | From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware. | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. 
In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.012121 | 0.01 | 0.001835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
EPIC: Context Adaptive Lossless Light Field Compression using Epipolar Plane Images This paper proposes extensions of CALIC for lossless compression of light field (LF) images. The overall prediction process is improved by exploiting the linear structure of Epipolar Plane Images (EPI) in a slope based prediction scheme. The prediction is improved further by averaging predictions made using horizontal and verticals EPIs. Besides this, the difference in these predictions is included in the error energy function, and the texture context is redefined to improve the overall compression ratio. The results using the proposed method shows significant bitrate-savings in comparison to standard lossless coding schemes and offers significant reduction in computational complexity in comparison to the state-of-the-art compression schemes. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . 
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set.
The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
A new, fast, and efficient image codec based on set partitioning in hierarchical trees Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code | Reversible Implementations Of Irreversible Component Transforms And Their Comparisons In Image Compression Reversible color component transforms derived by the LU factorization are briefly described. It is possible to obtain an reversible implementation to a given component transform, even if the original transform is irreversible. Some examples are presented and their performances are compared in image compression. | Mutual information-based context quantization Context-based lossless coding suffers in many cases from the so-called context dilution problem, which arises when, in order to model high-order statistic dependencies among data, a large number of contexts is used. In this case the learning process cannot be fed with enough data, and so the probability estimation is not reliable. To avoid this problem, state-of-the-art algorithms for lossless image coding resort to context quantization (CQ) into a few conditioning states, whose statistics are easier to estimate in a reliable way. It has been early recognized that in order to achieve the best compression ratio, contexts have to be grouped according to a maximal mutual information criterion. This leads to quantization algorithms which are able to determine a local minimum of the coding cost in the general case, and even the global minimum in the case of binary-valued input. This paper surveys the CQ problem and provides a detailed analytical formulation of it, allowing to shed light on some details of the optimization process. As a consequence we find that state-of-the-art algorithms have a suboptimal step. The proposed approach allows a steeper path toward the cost function minimum. Moreover, some sufficient conditions are found that allow to find a globally optimal solution even when the input alphabet is not binary. Even though the paper mainly focuses on the theoretical aspects of CQ, a number of experiments to validate the proposed method have been performed (for the special case of segmentation map lossless coding), and encouraging results have been recorded. 
| Distributed source coding of hyperspectral images A first attempt to exploit distributed source coding (DSC) principles for the lossless compression of hyperspectral images is presented. The DSC paradigm is exploited to design a very light coder which minimizes the exploitation of the correlation between the image bands. In this way we managed to move the computational complexity from the encoder to the decoder, thus matching the needs of classical acquisition systems where compression is achieved on board of the aerial platform and decoding at the ground station. Though the encoder does not explicitly exploit inter-band correlation, the achieved bit rate is about 1 bit/pixel lower than classical 2D schemes such as JPEG-LS or CALIC 2D, and only about 1 b/p higher than the best performing, and much more complex, 3D schemes. | A Review of DNA Microarray Image Compression We review the state of the art in DNA microarray image compression. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution. We then summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain results for several popular image coding techniques, including the most recent coding standards. Prediction-based schemes CALIC and JPEG-LS, and JPEG2000 using zero wavelet decomposition levels are the best performing standard compressors, but are all outperformed by the best microarray-specific technique, Battiato's CNN-based scheme. | Image Compression Practices And Standards For Geospatial Information Systems Compression technology is becoming increasingly important in geospatial information systems. In this paper we address some of the most relevant compression issues for remote sensing applications, and highlight the potential benefits of the JPEG set of standards. In particular, we review the JPEG, JPEG 2000, and JPEG-LS compression standards, and the JPIP protocol for interactive image retrieval. Finally, we discuss the use of compressed-domain processing, along with the use of flexible file formats for efficient storage and access to metadata. | Context modeling for near-lossless image coding This letter describes a context-based entropy coding suitable for any causal spatial differential pulse code modulation (DPCM) scheme performing lossless or near-lossless image coding. The proposed method is based on partitioning of prediction errors into homogeneous classes before arithmetic coding. A context function is measured on prediction errors lying within a two-dimensional (2-D) causal neighborhood, comprising the prediction support of the current pixel, as the root mean square (RMS) of residuals weighted by the reciprocal of their Euclidean distances. Its effectiveness is demonstrated in comparative experiments concerning both lossless and near-lossless coding. The proposed context coding/decoding is strictly real-time. | A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images Predictive coding is attractive for compression on board of spacecraft due to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis.
Traditionally, predictive compression focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image to achieve the desired target rate while minimizing distortion. The rate control algorithm allows achieving lossy near-lossless compression and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance, in this paper, we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that allows performing lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate, rate-distortion characteristics, and is extremely competitive with respect to state-of-the-art transform coding. | Improving neural network approach to lossless image coding In the paper optimization of lossless image coders based on Adaptive Neural Networks (AdNN) is addressed. Firstly, a detailed analysis of the influence of AdNN parameters on coder performance (average bitrate, time complexity) is done. Secondly, an improved technique denoted AdNN+ is proposed. Its main features are introduction of contexts, variable training window size, and post-processing by NLMS algorithm. Experiments show that indeed, the new method is better than others based on neural networks, and that it can even compete with the best existing image lossless coding algorithms. | An iterative template matching algorithm using the Chirp-Z transform for digital image watermarking The popularity of the World Wide Web has clearly demonstrated the commercial potential of the digital multimedia market. Unfortunately however, digital networks and multimedia also afford virtually unprecedented opportunities to pirate copyrighted material. As a result, digital image watermarking has become an active area of research. Techniques for hiding watermarks have grown steadily more sophisticated and increasingly robust to standard image processing techniques. Current... | Qualitative simulation Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation with differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a set of constraints abstracted from a differential equation, we prove that the QSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation.
We also show that any qualitative simulation algorithm will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the given constraints. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions. | Software testing based on formal specifications: a theory and a tool This paper addresses the problem of constructing test data sets from formal specifications. Starting from a notion of an ideal exhaustive test data set which is derived from the notion of satisfaction of the formal specification, it is shown how to select by refinements a practicable test set, i.e. computable, not rejecting correct programs (unbiased), and accepting only correct programs (valid), assuming some hypotheses. The hypotheses play an important role: they formalize common test practices and they express the gap between the success of the test and correctness; the size of the test set depends on the strength of the hypotheses. The paper shows an application of this theory in the case of algebraic specifications and presents the actual procedures used to mechanically produce such test sets, using Horn clause logic. These procedures are embedded in an interactive system which, given some general hypotheses schemes and an algebraic specification, produces a test set and the corresponding hypotheses. | Modeling Cooperative Work Processes - A Multiple Perspectives Framework This article presents a framework, concepts, and notations for modeling cooperative work from different perspectives. The framework is based on the basic notions information, task, and actor, which are modeled individually and in relation to each other. The Cooperation Modeling Technique (CMT) built on these concepts provides notations for representing the different aspects of cooperative work. This results in a flexible approach for representing different analysis perspectives pertaining to the design of cooperation support systems. The method aims at providing abstractions and mechanisms that are targeted not only at structured aspects of work processes but particularly at unstructured and less formalizable cooperation issues. The main focus of the approach is currently on asynchronous forms of cooperation, addressing informational awareness for asynchronous processes. The framework, however, also encompasses synchronous types of cooperation. The development of concrete methods for these aspects is a focus of future work. | Use of symmetry in prediction-error field for lossless compression of 3D MRI images Three dimensional MRI images, which are powerful tools for diagnosis of many diseases, require large storage space. A number of lossless compression schemes exist for this purpose. In this paper we propose a new approach for lossless compression of these images which exploits the inherent symmetry that exists in 3D MRI images. First, an efficient pixel prediction scheme is used to remove correlation between pixel values in an MRI image. Then a block matching routine is employed to take advantage of the symmetry within the prediction error image. Inter-slice correlations are eliminated using another block matching. Results of the proposed approach are compared with the existing standard compression techniques.
| 1.003034 | 0.004848 | 0.004407 | 0.004231 | 0.003253 | 0.002122 | 0.001119 | 0.000779 | 0.000317 | 0.000007 | 0 | 0 | 0 | 0 |
Research on Knowledge-Based Software Environments at Kestrel Institute We present a summary of the CHI project conducted at Kestrel Institute through mid-1984. The objective of this project was to perform research on knowledge-based software environments. Toward this end, key portions of a prototype environment, called CHI, were built that established the feasibility of this approach. One result of this research was the development of a wide-spectrum language that could be used to express all stages of the program development process in the system. Another result was that the prototype compiler was used to synthesize itself from very-high-level description of itself. In this way the system was bootstrapped. We describe the overall nature of the work done on this project, give highlights of implemented prototypes, and describe the implications that this work suggests for the future of software engineering. In addition to this historical perspective, current research projects at Kestrel Institute as well as commercial applications of the technology at Reasoning Systems are briefly surveyed. | Templar: a knowledge-based language for software specifications using temporal logic A software specification language Templar is defined in this article. The development of the language was guided by the following objectives: requirements specifications written in Templar should have a clear syntax and formal semantics, should be easy for a systems analyst to develop and for an end-user to understand, and it should be easy to map them into a broad range of design specifications. Templar is based on temporal logic and on the Activity-Event-Condition-Activity model of a rule which is an extension of the Event-Condition-Activity model in active databases. The language supports a rich set of modeling primitives, including rules, procedures, temporal logic operators, events, activities, hierarchical decomposition of activities, parallelism, and decisions combined together into a cohesive system. | Expert Systems and Software Enginnering: Ready for Marriage? | Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness. | Meta-rules: Reasoning about control How can we insure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful with a certain variety of problem, but they all share a common problem: they become ineffective in the face of a sufficiently large knowledge base. How then can we make it possible for a system to continue to function in the face of a very large number of plausibly useful chunks of knowledge? | Semantic Interoperability - Context, Issues and Research Directions An increasing dependence and cooperation between organisations has created a need for many enterprises to access remote as well as local information sources. Thus, it becomes important to be able to interconnect existing, heterogeneous information systems. 
One form of heterogeneity is semantic heterogeneity, which occurs when there is a disagreement regarding the interpretation and intended use of related information, or when the same phenomenon in a Universe of Discourse is modelled in different ways in two systems. In this paper, we survey the basic problems caused by semantic heterogeneity and suggest a number of research directions that address these problems. | On formal aspects of electronic (or digital) commerce: examples of research issues and challenges The notion of electronic or digital commerce is gaining widespread popularity. By and large, these developments are being led by industry and government, with academic research following these trends in the form of empirical and economic research. Much more fundamental improvements to (global) commerce are possible, but are presently being overlooked for lack of adequate formal theories, representations and tools. This paper attempts to incite research in these directions. | Qualitative simulation Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation with differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a set of constraints abstracted from a differential equation, we prove that the QSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation. We also show that any qualitative simulation algorithm will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the given constraints. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions. | Requirements engineering in 2001: (virtually) managing a changing reality Trends in society and technology force requirements engineering to expand its role from a one-shot activity in the development process to a virtual image that accompanies the changing reality of a system. A maturing software market also requires a better understanding of the differentiation in market segments for requirements engineering and standardisation of methodologies within these segments. On the research side, this requires a coherent perspective of hitherto parallel research directions towards a comprehensive understanding of requirements processes, as well as the optimal exploitation of new technologies that support the main role of requirements engineering; mutual learning of all stakeholders concerned | The Three Dimensions of Requirements Engineering Requirements engineering (RE) is perceived as an area of growing importance. Due to the increasing effort spent for research in this area many contributions to solve different problems within RE exist. The purpose of this paper is to identify the main goals to be reached during the requirements engineering process in order to develop a framework for RE.
This framework consists of the three dimensions: | A Formal Foundation for Distributed Workflow Execution Based on State Charts This paper provides a formal foundation for distributed workflow executions. The state chart formalism is adapted to the needs of a workflow model in order to establish a basis for both correctness reasoning and run-time support for complex and large-scale workflow applications. To allow for the distributed execution of a workflow across different workflow servers, which is required for scalability and organizational decentralization, a method for the partitioning of workflow specifications is developed. It is proven that the partitioning preserves the original state chart's behavior. | Business Process Modeling Process modeling and workflow applications have become more an more important during last decade. The main reason for this increased interest is the need to provide computer aided system integration of the enterprise based on its business processes. This need requires a technology that enables to integrate modeling, simulation and enactment of processes into one single package. The primary focus of all tools is to describe the way how activities are ordered in time. This kind of partially ordered steps shows how the output of one activity can serve as the input to another one. But there is also another aspect of the business process that has to be involved --where the activities are executed. The spatial aspect of the process enactment represents a new dimension in the process engineering discipline. It is also important to understand that not just process enactment but also the early phases of process specification have to cope with this spatial aspect. The paper is going to discuss how all these above mentioned principles can be integrated together and how the standard approach in process specification might be extended with the spatial dimension to make business process models more natural and understandable. | Evaluation of JPEG-LS, the new lossless and controlled-lossy still image compression standard, for compression of high-resolution elevation data The compression of elevation data is studied. The performance of JPEG-LS, the new international ISO/ITU standard for lossless and near-lossless (controlled-lossy) still-image compression, is investigated both for data from the USGS digital elevation model (DEM) database and the navy-provided digital terrain model (DTM) data. Using JPEG-LS has the advantage of working with a standard algorithm. Moreover, in contrast with algorithms like the popular JPEG-lossy standard, this algorithm permits the completely lossless compression of the data as well as a controlled lossy mode where a sharp upper bound on the elevation error is selected by the user. All these are achieved at a very low computational complexity. In addition to these algorithmic advantages, they show that JPEG-LS achieves significantly better compression results than those obtained with other (nonstandard) algorithms previously investigated for the compression of elevation data. The results here reported suggest that JPEG-LS can immediately be adopted for the compression of elevation data for a number of applications | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. 
However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1.015172 | 0.013868 | 0.012333 | 0.011161 | 0.011161 | 0.011161 | 0.005589 | 0.003723 | 0.001595 | 0.00007 | 0.000002 | 0 | 0 | 0 |
A generalized knowledge-based system for the recognition of unconstrained handwritten numerals A method of recognizing unconstrained handwritten numerals using a knowledge base is proposed. Features are collected from a training set and stored in a knowledge base that is used in the recognition stage. Recognition is accomplished by either an inference process or a structural method. The scheme is general, flexible, and applicable to different methods of feature extraction and recognition. By changing the acceptance parameters, a continuous range of performance can be achieved. Encouraging results on nearly 17000 totally unconstrained handwritten numerals are presented. The performance of the system under different recognition-rejection tradeoff ratios is analyzed in detail. | A note on human recognition of hand-printed characters | Handwritten alphanumeric character recognition by the neocognitron A pattern recognition system which works with the mechanism of the neocognitron, a neural network model for deformation-invariant visual pattern recognition, is discussed. The neocognitron was developed by Fukushima (1980). The system has been trained to recognize 35 handwritten alphanumeric characters. The ability to recognize deformed characters correctly depends strongly on the choice of the training pattern set. Some techniques for selecting training patterns useful for deformation-invariant recognition of a large number of characters are suggested. | Recognition of Roads in an Urban Map by Using the Topological Road-Network | Structural classification and relaxation matching of totally unconstrained handwritten zip-code numbers A system for recognizing totally unconstrained handwritten numerals is described. It comprises a feature extractor and two classification algorithms. The feature extractor decomposes the skeleton of a character into geometric primitives containing topological information of the character. These primitives consist of convex polygons and line segments, and features are generated from each primitive. The recognition process contains a fast structural classifier that identifies the majority of the samples, and a robust relaxation algorithm which classifies the rest of the data. The system was trained and tested on real-life handwritten ZIP codes. | Numeral Recognition by Weighting Local Decisions This paper presents a new technique to improve the combination of classification decisions obtained from local analysis of patterns. Specifically, a genetic algorithm is used to determine the optimal weight vector to balance the local decisions in the combination process. The experimental results, carried out in the field of hand-written numeral recognition, demonstrate the effectiveness of the new technique. | Tuning between Exponential Functions and Zones for Membership Functions Selection in Voronoi-Based Zoning for Handwritten Character Recognition In Handwritten Character Recognition, zoning is rightly considered as one of the most effective feature extraction techniques. In the past, many zoning methods have been proposed, based on static and dynamic zoning design strategies. Notwithstanding, little attention has been paid so far to the role of function-zone membership functions, that define the way in which a feature influences different zones of the pattern. In this paper the effectiveness of membership functions for zoning-based classification is investigated.
For the purpose, a useful representation of zoning methods based on Voronoi Diagram is adopted and several membership functions are considered, according to abstract -- , ranked- and measurement-levels strategies. Furthermore, a new class of membership functions with adaptive capabilities is introduced and a real-coded genetic algorithm is proposed to determine both the optimal zoning and the adaptive membership functions most profitable for a given classification problem. The experimental tests, carried out in the field of handwritten digit recognition, show the superiority of adaptive membership functions compared to traditional functions, whatever zoning method is used. | Class-based n-gram models of natural language We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics. | Edge-directed prediction for lossless compression of natural images This paper sheds light on the least-square (LS)-based adaptive prediction schemes for lossless compression of natural images. Our analysis shows that the superiority of the LS-based adaptation is due to its edge-directed property, which enables the predictor to adapt reasonably well from smooth regions to edge areas. Recognizing that LS-based adaptation improves the prediction mainly around the edge areas, we propose a novel approach to reduce its computational complexity with negligible performance sacrifice. The lossless image coder built upon the new prediction scheme has achieved noticeably better performance than the state-of-the-art coder CALIC with moderately increased computational complexity | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | A taxonomy for real-world modelling concepts A major component in problem analysis is to model the real world itself. However, the modelling languages suggested so far, suffer from several weaknesses, especially with respect to dynamics . First, dynamic modelling languages originally aimed at describing data—rather than real-world—processes. 
Moreover, they are either weak in expression, so that models become too vague to be meaningful, or they are cluttered with rigorous detail, which makes modelling unnecessarily complicated and inhibits communication with end users. This paper establishes a simple and intuitive conceptual basis for the modelling of the real world, with an emphasis on dynamics. Object-orientation is not considered appropriate for this purpose, due to its focus on static object structure. Dataflow diagrams, on the other hand, emphasize dynamics, but unfortunately, some major conceptual deficiencies make DFDs, as well as their various formal extensions, unsuited for real-world modelling. This paper presents a taxonomy of concepts for real-world modelling which relies on some seemingly small, but essential, modifications of the DFD language. Hence the well-known, communication-oriented diagrammatic representations of DFDs can be retained. It is indicated how the approach can support a smooth transition into later stages of object-oriented design and implementation. | Refinement and Continuous Behaviour Refinement Calculus is a formal framework for the development of provably correct software. It is used by Action Systems, a predicate transformer based framework for constructing distributed and reactive systems. Recently, Action Systems were extended with a new action called the differential action. It allows the modelling of continuous behaviour, such that Action Systems may model hybrid systems. In this paper we investigate how the differential action fits into the refinement framework. As the main result we develop simple laws for proving a refinement step involving continuous behaviour within the Refinement Calculus. | A taxonomy for the early stages of the software development life cycle Most researchers in the software engineering community use the term “requirements” to describe the initial stage of software development, and they define requirements to be a process of describing what, not how. However, the range of tools and techniques that are currently sold as requirements tools and techniques extends from aids for analysts asking potential customers appropriate questions about an existent problem to aids for defining algorithms for software modules. This paper presents a taxonomy of the early stages of the software development life cycle to enable prospective tool and technique users to understand what they are buying and to enable future toolsmiths and technique developers to uniquely categorize and characterize their product in comparison with others. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.068522 | 0.067347 | 0.067347 | 0.067347 | 0.023758 | 0.000429 | 0.000288 | 0.00002 | 0 | 0 | 0 | 0 | 0 | 0
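The zoning abstracts in the record above describe extracting features by partitioning a character image into zones and letting each foreground pixel influence zones through a membership function. The following sketch illustrates that general idea only; the 4x4 grid, the Gaussian membership function and every parameter value are assumptions made for this example and are not taken from the cited papers.

```python
# Illustrative sketch only: crisp and fuzzy zoning features for a binary
# character image. The grid size, the Gaussian membership function and the
# parameter values are assumptions, not the methods of the papers above.
import numpy as np

def zone_density_features(img, grid=(4, 4)):
    """Crisp zoning: fraction of foreground pixels in each cell of a regular grid."""
    img = (np.asarray(img) > 0).astype(float)
    feats = []
    for row_band in np.array_split(img, grid[0], axis=0):
        for cell in np.array_split(row_band, grid[1], axis=1):
            feats.append(cell.mean())
    return np.array(feats)

def fuzzy_zone_features(img, grid=(4, 4), sigma=4.0):
    """Fuzzy zoning: each foreground pixel votes for every zone, weighted by a
    Gaussian membership of its distance to the zone centre."""
    img = (np.asarray(img) > 0).astype(float)
    h, w = img.shape
    centres_y = (np.arange(grid[0]) + 0.5) * h / grid[0]
    centres_x = (np.arange(grid[1]) + 0.5) * w / grid[1]
    centres = np.array([(y, x) for y in centres_y for x in centres_x])
    ys, xs = np.nonzero(img)
    if len(ys) == 0:
        return np.zeros(len(centres))
    pixels = np.stack([ys, xs], axis=1).astype(float)
    d2 = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    membership = np.exp(-d2 / (2.0 * sigma ** 2))         # pixel-to-zone memberships
    membership /= membership.sum(axis=1, keepdims=True)   # normalise per pixel
    return membership.sum(axis=0) / len(pixels)           # accumulated votes per zone

if __name__ == "__main__":
    digit = np.zeros((28, 28))
    digit[4:24, 13:15] = 1  # crude vertical stroke standing in for a "1"
    print(zone_density_features(digit).round(3))
    print(fuzzy_zone_features(digit).round(3))
```

Swapping the Gaussian for another weighting (ranked, exponential, adaptive) changes only the membership step, which is where the papers above concentrate their design effort.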
The Design of Free Structure Granular Mappings: The Use of the Principle of Justifiable Granularity The study introduces a concept of mappings realized in the presence of information granules and offers a design framework supporting the formation of such mappings. Information granules are conceptually meaningful entities formed on the basis of a large number of experimental input–output numeric data available for the construction of the model. We develop a conceptually and algorithmically sound way of forming information granules. Considering the directional nature of the mapping to be formed, this directionality aspect needs to be taken into account when developing information granules. The property of directionality implies that while the information granules in the input space could be constructed with a great deal of flexibility, the information granules formed in the output space have to inherently relate to those built in the input space. The input space is granulated by running a clustering algorithm; for illustrative purposes, the focus here is on fuzzy clustering realized with the aid of the fuzzy C-means algorithm. The information granules in the output space are constructed with the aid of the principle of justifiable granularity (being one of the underlying fundamental conceptual pursuits of Granular Computing). The construct exhibits two important features. First, the constructed information granules are formed in the presence of information granules already constructed in the input space (and this realization is reflective of the direction of the mapping from the input to the output space). Second, the principle of justifiable granularity does not confine the realization of information granules to a single formalism such as fuzzy sets but helps form the granules expressed in any required formalism of information granulation. The quality of the granular mapping (viz. the mapping realized for the information granules formed in the input and output spaces) is expressed in terms of the coverage criterion (articulating how well the experimental data are “covered” by information granules produced by the granular mapping for any input experimental data). Some parametric studies are reported by quantifying the performance of the granular mapping (expressed in terms of the coverage and specificity criteria) versus the values of certain parameters utilized in the construction of output information granules through the principle of justifiable granularity. The plots of coverage–specificity dependency help determine a knee point and reach a sound compromise between these two conflicting requirements imposed on the quality of the granular mapping. Furthermore, the quality of the mapping is quantified with regard to the number of information granules (implying a certain granularity of the mapping). A series of experiments is reported as well. | Fuzzy clustering analysis for optimizing fuzzy membership functions Fuzzy model identification is an application of fuzzy inference systems for identifying unknown functions from a given set of sampled data. The most important step in a fuzzy identification task is to decide the parameters of the membership functions (MFs) used in the fuzzy system. Considerable effort (Chung and Lee, 1994; Jang, 1993; Sun and Jang, 1993) has been devoted to initializing the parameters of fuzzy membership functions. However, the problems of parameter identification have not been solved formally. Assessments of these algorithms are discussed in the paper.
Based on the fuzzy c-means (FCM) Bezdek (1987) clustering algorithm, we propose a heuristic method to calibrate the fuzzy exponent iteratively. A hybrid learning algorithm for refining the system parameters is then presented. Examples are demonstrated to show the effectiveness of the proposed method, comparing with the equalized universe method (EUM) and subtractive clustering method (SCM) Chiu (1994). The simulation results indicate the general applicability of our methods to a wide range of applications. | Collaborative clustering with the use of Fuzzy C-Means and its quantification In this study, we introduce the concept of collaborative fuzzy clustering-a conceptual and algorithmic machinery for the collective discovery of a common structure (relationships) within a finite family of data residing at individual data sites. There are two fundamental features of the proposed optimization environment. First, given existing constraints which prevent individual sites from exchanging detailed numeric data, any communication has to be realized at the level of information granules. The specificity of these granules impacts the effectiveness of ensuing collaborative activities. Second, the fuzzy clustering realized at the level of the individual data site has to constructively consider the findings communicated by other sites and act upon them while running the optimization confined to the particular data site. Adhering to these two general guidelines, we develop a comprehensive optimization scheme and discuss its two-phase character in which the communication phase of the granular findings intertwines with the local optimization being realized at the level of the individual site and exploits the evidence collected from other sites. The proposed augmented form of the objective function is essential in the navigation of the overall optimization that has to be completed on a basis of the data and available information granules. The intensity of collaboration is optimized by choosing a suitable tradeoff between the two components of the objective function. The objective function based clustering used here concerns the well-known Fuzzy C-Means (FCM) algorithm. Experimental studies presented include some synthetic data, selected data sets coming from the machine learning repository and the weather data coming from Environment Canada. | Design of information granule-oriented RBF neural networks and its application to power supply for high-field magnet To realize effective modeling and secure accurate prediction abilities of models for power supply for high-field magnet (PSHFM), we develop a comprehensive design methodology of information granule-oriented radial basis function (RBF) neural networks. The proposed network comes with a collection of radial basis functions, which are structurally as well as parametrically optimized with the aid of information granulation and genetic algorithm. The structure of the information granule-oriented RBF neural networks invokes two types of clustering methods such as K-Means and fuzzy C-Means (FCM). The taxonomy of the resulting information granules relates to the format of the activation functions of the receptive fields used in RBF neural networks. The optimization of the network deals with a number of essential parameters as well as the underlying learning mechanisms (e.g., the width of the Gaussian function, the numbers of nodes in the hidden layer, and a fuzzification coefficient used in the FCM method). 
During the identification process, we are guided by a weighted objective function (performance index) in which a weight factor is introduced to achieve a sound balance between approximation and generalization capabilities of the resulting model. The proposed model is applied to modeling power supply for high-field magnet where the model is developed in the presence of a limited dataset (where the small size of the data is implied by high costs of acquiring data) as well as strong nonlinear characteristics of the underlying phenomenon. The obtained experimental results show that the proposed network exhibits high accuracy and generalization capabilities. | From fuzzy data analysis and fuzzy regression to granular fuzzy data analysis This note offers some personal views on the two pioneers of fuzzy sets, late Professors Hideo Tanaka and Kiyoji Asai. The intent is to share some personal memories about these remarkable researchers and humans, highlight their long-lasting research accomplishments and stress a visible impact on the fuzzy set community.The note elaborates on new and promising research avenues initiated by fuzzy regression and identifies future developments of these models emerging within the realm of Granular Computing and giving rise to a plethora of granular fuzzy models and higher-order and higher-type granular constructs. | Description and classification of granular time series The study is concerned with a concept and a design of granular time series and granular classifiers. In contrast to the plethora of various models of time series, which are predominantly numeric, we propose to effectively exploit the idea of information granules in the description and classification of time series. The numeric (optimization-oriented) and interpretation abilities of granular time series and their classifiers are highlighted and quantified. A general topology of the granular classifier involving a formation of a granular feature space and the usage of the framework of relational structures (relational equations) in the realization of the classifiers is presented. A detailed design process is elaborated on along with a discussion of the pertinent optimization mechanisms. A series of experiments is covered leading to a quantitative assessment of the granular classifiers and their parametric analysis. | Multivariate modeling and type-2 fuzzy sets This paper explores the link between type-2 fuzzy sets and multivariate modeling. Elements of a space X are treated as observations fuzzily associated with values in a multivariate feature space. A category or class is likewise treated as a fuzzy allocation of feature values (possibly dependent on values in X). We observe that a type-2 fuzzy set on X generated by these two fuzzy allocations captures imprecision in the class definition and imprecision in the observations. In practice many type-2 fuzzy sets are in fact generated in this way and can therefore be interpreted as the output of a classification task. We then show that an arbitrary type-2 fuzzy set can be so constructed, by taking as a feature space a set of membership functions on X. This construction presents a new perspective on the Representation Theorem of Mendel and John. The multivariate modeling underpinning the type-2 fuzzy sets can also constrain realizable forms of membership functions. 
Because averaging operators such as centroid and subsethood on type-2 fuzzy sets involve a search for optima over membership functions, constraining this search can make computation easier and tighten the results. We demonstrate how the construction can be used to combine representations of concepts and how it therefore provides an additional tool, alongside standard operations such as intersection and subsethood, for concept fusion and computing with words. | List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications. | Scikit-learn: Machine Learning in Python Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net. | Specifying dynamic support for collaborative work within WORLDS In this paper, we present a specification language developed for WORLDS, a next generation computer-supported collaborative work system. Our specification language, called Introspect, employs a meta-level architecture to allow run-time modifications to specifications. We believe such an architecture is essential to WORLDS' ability to provide dynamic support for collaborative work in an elegant fashion. | Refinement of State-Based Concurrent Systems The traces, failures, and divergences of CSP can be expressed as weakest precondition formulæ over action systems. We show how such systems may be refined up to failures-divergences, by giving two proof methods which are sound and jointly complete: forwards and backwards simulations. The technical advantage of our weakest precondition approach over the usual relational approach is in our simple handling of divergence; the practical advantage is in the fact that the refinement calculus for sequential programs may be used to calculate forwards simulations. Our methods may be adapted to state-based development methods such as VDM or Z. 
| Reasoning with Background Knowledge - A Three-Level Theory | Abstraction of objects by conceptual clustering Closely tied to the logic of first-order predicates, the formalism of conceptual graphs constitutes a knowledge representation language. The abstraction of systems presents several advantages. It helps to render complex systems more understandable, thus facilitating their analysis and their design. Our approach to conceptual graph abstraction, or conceptual clustering, is based on rectangular decomposition. It produces a set of clusters representing similarities between subsets of objects to be abstracted, organized into a hierarchy of classes: the Knowledge Space. Some conceptual clustering methods already exist. Our approach is distinguishable from other approaches insofar as it allows a gain in space and time. | MoMut::UML Model-Based Mutation Testing for UML | 1.101469 | 0.102939 | 0.102939 | 0.102939 | 0.102939 | 0.052204 | 0.020626 | 0 | 0 | 0 | 0 | 0 | 0 | 0
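The record above leans on the principle of justifiable granularity: an information granule is grown around a numeric representative so that coverage of the data and specificity of the granule are balanced. The snippet below is a minimal one-dimensional sketch of that trade-off under assumptions made here (interval granules, a linear specificity model, a simple grid search); it is not the construction used in the paper.

```python
# Minimal 1-D sketch of the principle of justifiable granularity: pick the
# bounds of an interval granule around the median so that coverage * specificity
# is maximised. The linear specificity model and the grid search are assumptions
# for illustration, not the construction used in the paper above.
import numpy as np

def justifiable_interval(data, n_candidates=200):
    data = np.sort(np.asarray(data, dtype=float))
    med = np.median(data)
    span = data.max() - data.min()

    def best_bound(candidates, side):
        best, best_score = med, -1.0
        for b in candidates:
            lo, hi = (b, med) if side == "lower" else (med, b)
            coverage = np.mean((data >= lo) & (data <= hi))   # how much data is covered
            specificity = 1.0 - abs(b - med) / span           # how narrow the granule stays
            score = coverage * specificity
            if score > best_score:
                best, best_score = b, score
        return best

    lower = best_bound(np.linspace(data.min(), med, n_candidates), "lower")
    upper = best_bound(np.linspace(med, data.max(), n_candidates), "upper")
    return lower, upper

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=5.0, scale=1.0, size=500)
    a, b = justifiable_interval(sample)
    print(f"granule [{a:.2f}, {b:.2f}] around median {np.median(sample):.2f}")
```

Plotting the score against the candidate bound reproduces, in miniature, the coverage-specificity knee discussed in the abstract.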
Consensus of Nonlinear Multiagent Systems With Aperiodic Intermittent Communications via Nonfragile Tracking Protocol In this article, the problem of nonfragile tracking protocol design for high-order multiagent systems with Lipschitz-type node dynamics is investigated. It is assumed that the network is subject to aperiodic intermittent communications and that the in-neighboring agents’ interactions switch within a set of directed graphs, each element of which contains a directed spanning tree. A zero-order hold is employed to retain the local information from in-neighboring agents while the network communications are out of action. By virtue of a proposed two-step switching mechanism, the consensus tracking problem of interest is equivalently cast as the asymptotic stabilization of a class of uncertain switched time-delay systems. Taking advantage of algebraic graph theory, Lyapunov–Krasovskii stability analysis, and a robust nonfragile control approach, it is proved that nonfragile consensus tracking can be achieved if a group of linear matrix inequalities is feasible and, for each time interval, the communication rate is larger than a threshold value. Numerical examples demonstrate the effectiveness of the theoretical results. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools.
In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system.
The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach builds on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
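The consensus-tracking abstract at the head of the record above combines a switching topology, aperiodic intermittent links and held (zero-order-hold) neighbour information. The simulation below is only a toy version of that setting: first-order integrator agents instead of the paper's high-order Lipschitz dynamics, and an arbitrary graph, gain and on/off schedule chosen here for illustration.

```python
# Toy simulation of leader-following consensus under aperiodic intermittent
# communication, loosely inspired by the consensus-tracking abstract above.
# Agents are first-order integrators (not the paper's high-order dynamics);
# graph, gain and on/off schedule are arbitrary illustrative choices. While
# the network is "off", followers keep using the last received information
# (a zero-order hold).
import numpy as np

A = np.array([[0, 0, 0, 0],      # directed adjacency among the 4 followers,
              [1, 0, 0, 0],      # containing a spanning tree rooted at agent 0
              [0, 1, 0, 0],
              [1, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, 0.0])   # which followers hear the leader directly
deg = A.sum(axis=1)
k, dt, T = 2.0, 0.01, 10.0

def comm_on(t):
    window = 1.0 + 0.1 * int(t)          # aperiodic windows of growing length
    return (t % window) < 0.6 * window   # links active for 60% of each window

x = np.array([3.0, -2.0, 5.0, 0.0])      # follower states
x_leader = 1.0                           # static tracking target
held_x, held_leader = x.copy(), x_leader # zero-order-hold buffers

for step in range(int(T / dt)):
    if comm_on(step * dt):
        held_x, held_leader = x.copy(), x_leader   # refresh held information
    # consensus-tracking protocol driven by (possibly stale) held information
    u = -k * (deg * x - A @ held_x + b * (x - held_leader))
    x = x + dt * u

print("final follower states:", np.round(x, 3), " leader:", x_leader)
```

The run converges toward the leader's value as long as the communication rate stays high enough, which is the qualitative condition the abstract states in terms of a threshold on the communication rate.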
A Pluralistic Knowledge-Based Approach to Software Specification We propose a pluralistic attitude to software specification, where multiple viewpoints/methods are integrated to enhance our understanding of the required system. In particular, we investigate how this process can be supported by heuristics acquired from well-known software specification methods such as Data Flow Diagrams, Petri Nets and Entity Relationship Models. We suggest the classification of heuristics by method and activity, and show how they can be formalised in Prolog. More general heuristics indicating complementarity and consistency between methods are also formalised. A practical by-product has been the generation of "expert-assistance" to the integration of methods: PRISMA is a pluralistic knowledge-based system supporting the coherent construction of a software specification from multiple viewpoints. The approach is illustrated via examples. Theoretical and practical issues related to specification processes and environments supporting a pluralistic paradigm are also discussed. | Static Analysis to Identify Invariants in RSML Specifications. Static analysis of formal, high-level specifications of safety-critical software can discover flaws in the specification that would escape conventional syntactic and semantic analysis. As an example, specifications written in the Requirements State Machine Language (RSML) should be checked for consistency: two transitions out of the same state that are triggered by the same event should have mutually exclusive guarding conditions. The check uses only behavioral information that is local to... | Distributed Intelligent Agents In Retsina, the authors have developed a distributed collection of software agents that cooperate asynchronously to perform goal-directed information retrieval and integration for supporting a variety of decision-making tasks. Examples for everyday organizational decision making and financial portfolio management demonstrate its effectiveness. | Generating, integrating, and activating thesauri for concept-based document retrieval A blackboard-based document management system that uses a neural network spreading-activation algorithm which lets users traverse multiple thesauri is discussed. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts. The system provides two control modes: a browsing module and an activation module that determine the sequence of operations. With the browsing module, users have full control over which knowledge sources to browse and what terms to select. The system's query formation; the retrieving, ranking and selection of documents; and thesaurus activation are described. | Document ranking and the vector-space model Efficient and effective text retrieval techniques are critical in managing the increasing amount of textual information available in electronic form. Yet text retrieval is a daunting task because it is difficult to extract the semantics of natural language texts. Many problems must be resolved before natural language processing techniques can be effectively applied to a large collection of texts. Most existing text retrieval techniques rely on indexing keywords. Unfortunately, keywords or index terms alone cannot adequately capture the document contents, resulting in poor retrieval performance. Yet keyword indexing is widely used in commercial systems because it is still the most viable way by far to process large amounts of text.
Using several simplifications of the vector-space model for text retrieval queries, the authors seek the optimal balance between processing efficiency and retrieval effectiveness as expressed in relevant document rankings. | Tolerant planning and negotiation in generating coordinated movement plans in an automated factory Plan robustness is important for real-world applications where modelling imperfections often result in execution deviations. The concept of tolerant planning is suggested as one of the ways to build robust plans. Tolerant planning achieves this aim by being tolerant of an agent's own execution deviations. When applied to multi-agent domains, it has the additional characteristic of being tolerant of other agents' deviant behaviour. Tolerant planning thus defers dynamic replanning until execution errors become excessive. The underlying strategy is to provide more than ample resources for agents to achieve their goals. Such redundancies aggravate the resource contention problem. To counter this, the iterative negotiation mechanism is suggested. It requires agents to be skillful in negotiating with other agents to resolve conflicts in such a way as to minimize compromising one's own tolerances and yet being benevolent in helping others find a feasible plan. | Systematic Incremental Validation of Reactive Systems via Sound Scenario Generalization Validating the specification of a reactive system, such as a telephone switching system, traffic controller, or automated network service, is difficult, primarily because it is extremely hard even to state a set of complete and correct requirements, let alone to prove that a specification satisfies them. In the ISAT project [10], end-user requirements are stated as concrete behavior scenarios, and a multi-functional apprentice system aids the human developer in acquiring and maintaining a specification consistent with the scenarios. ISAT's Validation Assistant (isat-va) embodies a novel, systematic, and incremental approach to validation based on the novel technique of sound scenario generalization, which automatically states and proves validation lemmas. This technique enables isat-va to organize the validity proof around a novel knowledge structure, the library of generalized fragments, and provides automated progress tracking and semi-automated help in increasing proof coverage. The approach combines the advantages of software testing and automated theorem proving of formal requirements, avoiding most of their shortcomings, while providing unique advantages of its own. | O-O Requirements Analysis: an Agent Perspective In this paper, we present a formal object-oriented specification language designed for capturing requirements expressed on composite real-time systems. The specification describes the system as a society of 'agents', each of them being characterised (i) by its responsibility with respect to actions happening in the system and (ii) by its time-varying perception of the behaviour of the other agents. On top of the language, we also suggest some methodological guidance by considering a general strategy based on a progressive assignment of responsibilities to agents. | Four dark corners of requirements engineering Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the "four dark corners," exposing problems and proposing solutions.
We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed. Categories and Subject Descriptors: D.2.1 (Software Engineering): Requirements/Specifications—methodologies | A logic of action for supporting goal-oriented elaborations of requirements Constructing requirements specifications for a complex system is quite a difficult process. In this paper, we have focussed on the elaboration part of this process, where new requirements are progressively identified and incorporated in the requirements document. We propose a requirements specification language which, beyond the mere expression of requirements, also supports the elaboration step. This language is a dialect of Gist in which the concept of goal and that of agent, characterized by some responsibility, are identified. A formalization of this requirements language is proposed in terms of a non-standard modal logic of actions. | Mapping design knowledge from multiple representations The requirements and specifications documents which initiate and control design and development projects typically use a variety of formal and informal notational systems. The goal of the research reported is to automatically interpret requirement documents expressed in a variety of notations and to integrate the interpretations in order to support requirements analysis and synthesis from them. Because the source notations include natural language, a form of semantic net called conceptual graphs is adopted as the intermediate knowledge representation for expressing interpretations and integrating them. The focus is to describe the interpretation or mapping of a few requirements notations to conceptual graphs, and to indicate the process of joining these interpretations. | Towards a Reference Framework for Process Concepts This paper discusses the importance of process support for business activities. A reference framework for process concepts and technology support is sought. The general requirements and properties of the process domain are first discussed. Then, four process sub-models are presented to describe activities, products, tools and organisations, respectively. Five process model phases are also introduced, as well as meta-processes and related human roles to handle process models and their... | Fast Piecewise Linear Predictors For Lossless Compression Of Hyperspectral Imagery The work presented here deals with the design of predictors for the lossless compression of hyperspectral imagery. The large number of spectral bands that characterize hyperspectral imagery gives it properties that can be exploited when performing compression. Specifically, in addition to the spatial correlation which is common to all images, the large number of spectral bands means a high spectral correlation also. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage.
This work deals with the design of predictors for the decorrelation stage which are both fast and good. Fast implies low complexity, which was achieved by having predictors with no multiplications, only comparisons and additions. Good means predictors that have performance close to the state of the art. To achieve this, both spectral and spatial correlations are used for the predictor. The performance of the developed predictors is compared to that of the most widely known algorithms, LOCO-I, used in JPEG-Lossless, and CALIC-Extended, the original version of which had the best compression performance of all the algorithms submitted to the JPEG-LS committee. The developed algorithms are shown to be much less complex than CALIC-Extended with better compression performance. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1.028114 | 0.023032 | 0.022418 | 0.022418 | 0.022418 | 0.022418 | 0.011814 | 0.008798 | 0.004542 | 0.000181 | 0.000005 | 0 | 0 | 0
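The hyperspectral-compression abstract closing the record above asks for predictors built only from comparisons and additions and benchmarks them against LOCO-I. As a concrete, well-known reference point (not the predictor proposed in that paper), the sketch below implements the LOCO-I / JPEG-LS median edge detector and measures how much it whitens a small test image.

```python
# The LOCO-I / JPEG-LS median edge detector (MED): a multiplication-free
# spatial predictor of the kind the hyperspectral-compression abstract above
# refers to. This is the standard LOCO-I predictor, not the predictor proposed
# in that paper; the zeroth-order entropy is only a rough stand-in for a coder.
import numpy as np

def med_predict(img):
    """Predict each pixel from its west (a), north (b) and north-west (c) neighbours."""
    img = np.asarray(img, dtype=np.int64)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if (x > 0 and y > 0) else 0
            if c >= max(a, b):
                pred[y, x] = min(a, b)   # edge detected above or to the left
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c   # smooth region: planar prediction
    return pred

def entropy(values):
    """Empirical zeroth-order entropy in bits per sample."""
    _, counts = np.unique(np.asarray(values).ravel(), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ramp = np.add.outer(np.arange(64), np.arange(64))   # smooth gradient image
    img = ramp + rng.integers(0, 4, ramp.shape)         # plus mild texture
    res = img - med_predict(img)
    print("pixel entropy   :", round(entropy(img), 3), "bits")
    print("residual entropy:", round(entropy(res), 3), "bits")
```

One plausible way to bring in the spectral correlation the abstract mentions would be to add the co-located pixel of the previous band to the set of prediction candidates; that is a guess at the flavour of the approach, not its actual design.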
A method for testing and validating executable statechart models Statecharts constitute an executable language for modelling event-based reactive systems. The essential complexity of statechart models solicits the need for advanced model testing and validation techniques. In this article, we propose a method aimed at enhancing statechart design with a range of techniques that have proven their usefulness to increase the quality and reliability of source code. The method is accompanied by a process that flexibly accommodates testing and validation techniques such as test-driven development, behaviour-driven development, design by contract, and property statecharts that check for violations of behavioural properties during statechart execution. The method is supported by the Sismic tool, an open-source statechart interpreter library in Python, which supports all the aforementioned techniques. Based on this tooling, we carry out a controlled user study to evaluate the feasibility, usefulness and adequacy of the proposed techniques for statechart testing and validation. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990s technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short).
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus.
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system, resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power-aware refinement of systems. The proposed approach builds on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
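The statechart-testing record above revolves around checking behavioural properties while a statechart executes (design by contract, property statecharts). The toy state machine below shows only the bare idea of re-checking an invariant after every processed event; it does not use the Sismic API, and the states, events and invariant are invented for the example.

```python
# Tiny illustration of runtime property checking in an event-driven state
# machine, in the spirit of the design-by-contract / property-statechart
# techniques mentioned above. This is NOT the Sismic API; everything here
# (states, events, invariant) is made up for the example.
class Turnstile:
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("locked", "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    def __init__(self):
        self.state = "locked"
        self.coins = 0
        self.entries = 0

    def invariant(self):
        # behavioural property re-checked after every event
        return self.entries <= self.coins

    def send(self, event):
        if event == "coin":
            self.coins += 1
        if event == "push" and self.state == "unlocked":
            self.entries += 1
        self.state = self.TRANSITIONS[(self.state, event)]
        assert self.invariant(), f"invariant violated after '{event}'"


if __name__ == "__main__":
    t = Turnstile()
    for e in ["coin", "push", "push", "coin", "push"]:
        t.send(e)
        print(e, "->", t.state, f"(coins={t.coins}, entries={t.entries})")
```

A test-driven or behaviour-driven workflow would simply drive such event sequences from test cases and let the asserted property flag any violation as soon as it occurs.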
Spread Spectrum-Based Image Watermarking Resistant to Rotation and Scaling Using Radon Transform In this paper, a novel multibit image watermarking scheme resistant to rotation and scaling attacks is presented. The Radon transform is used to correct the orientation of an image. Then the spread spectrum-based watermarking scheme which is scaling invariant is used to embed and extract the watermark message in the DCT domain of the corrected image. Experimental results show that the scheme possesses good robustness against rotation, scaling attacks and considerable robustness against typical image processing. | Improved seam carving for video retargeting Video, like images, should support content aware resizing. We present video retargeting using an improved seam carving operator. Instead of removing 1D seams from 2D images we remove 2D seam manifolds from 3D space-time volumes. To achieve this we replace the dynamic programming method of seam carving with graph cuts that are suitable for 3D volumes. In the new formulation, a seam is given by a minimal cut in the graph and we show how to construct a graph such that the resulting cut is a valid seam. That is, the cut is monotonic and connected. In addition, we present a novel energy criterion that improves the visual quality of the retargeted images and videos. The original seam carving operator is focused on removing seams with the least amount of energy, ignoring energy that is introduced into the images and video by applying the operator. To counter this, the new criterion is looking forward in time - removing seams that introduce the least amount of energy into the retargeted result. We show how to encode the improved criterion into graph cuts (for images and video) as well as dynamic programming (for images). We apply our technique to images and videos and present results of various applications. | DAISY: an efficient dense descriptor applied to wide-baseline stereo. In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these. | Robust video watermarking based on affine invariant regions in the compressed domain This paper proposes a novel robust video watermarking scheme based on local affine invariant features in the compressed domain. 
This scheme is resilient to geometric distortions and quite suitable for DCT-encoded compressed video data because it performs directly in the block DCTs domain. In order to synchronize the watermark, we use local invariant feature points obtained through the Harris-Affine detector which is invariant to affine distortions. To decode the frames from DCT domain to the spatial domain as fast as possible, a fast inter-transformation between block DCTs and sub-block DCTs is employed and down-sampling frames in the spatial domain are obtained by replacing each sub-blocks DCT of 2x2 pixels with half of the corresponding DC coefficient. The above-mentioned strategy can significantly save computational cost in comparison with the conventional method which accomplishes the same task via inverse DCT (IDCT). The watermark detection is performed in spatial domain along with the decoded video playing. So it is not sensitive to the video format conversion. Experimental results demonstrate that the proposed scheme is transparent and robust to signal-processing attacks, geometric distortions including rotation, scaling, aspect ratio changes, linear geometric transforms, cropping and combinations of several attacks, frame dropping, and frame rate conversion. | Real-Time Compressed- Domain Video Watermarking Resistance to Geometric Distortions A proposed real-time video watermarking scheme is transparent and robust to geometric distortions, including rotation with cropping, scaling, aspect ratio change, frame dropping, and swapping. | A New Digital Image Watermarking Algorithm Resilient to Desynchronization Attacks Synchronization is crucial to design a robust image watermarking scheme. In this paper, a novel feature-based image watermarking scheme against desynchronization attacks is proposed. The robust feature points, which can survive various signal-processing and affine transformation, are extracted by using the Harris-Laplace detector. A local characteristic region (LCR) construction method based on the scale-space representation of an image is considered for watermarking. At each LCR, the digital watermark is repeatedly embedded by modulating the magnitudes of discrete Fourier transform coefficients. In watermark detection, the digital watermark can be recovered by maximum membership criterion. Simulation results show that the proposed scheme is invisible and robust against common signal processing, such as median filtering, sharpening, noise adding, JPEG compression, etc., and desynchronization attacks, such as rotation, scaling, translation, row or column removal, cropping, and random bend attack, etc. | Digital watermarking robust to geometric distortions. In this paper, we present two watermarking approaches that are robust to geometric distortions. The first approach is based on image normalization, in which both watermark embedding and extraction are carried out with respect to an image normalized to meet a set of predefined moment criteria. We propose a new normalization procedure, which is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. The second approach is based on a watermark resynchronization scheme aimed to alleviate the effects of random bending attacks. In this scheme, a deformable mesh is used to correct the distortion caused by the attack. The watermark is then extracted from the corrected image. 
In contrast to the first scheme, the latter is suitable for private watermarking applications, where the original image is necessary for watermark detection. In both schemes, we employ a direct-sequence code division multiple access approach to embed a multibit watermark in the discrete cosine transform domain of the image. Numerical experiments demonstrate that the proposed watermarking schemes are robust to a wide range of geometric attacks. | Global exponential stability in Lagrange sense for inertial neural networks with time-varying delays. In this paper, the global exponential stability in Lagrange sense related to inertial neural networks with time-varying delay is investigated. Firstly, by constructing a proper variable substitution, the original system is transformed into the first order differential system. Next, some succinct criteria for the ultimate boundedness and global exponential attractive set are derived via the Lyapunov function method, inequality techniques and analytical method. Meanwhile, the detailed estimations for the global exponential attractive set are established. Finally, the effectiveness of theoretical results has been illustrated via two numerical examples. | Wirtinger-based integral inequality: Application to time-delay systems In the last decade, the Jensen inequality has been intensively used in the context of time-delay or sampled-data systems since it is an appropriate tool to derive tractable stability conditions expressed in terms of linear matrix inequalities (LMIs). However, it is also well-known that this inequality introduces an undesirable conservatism in the stability conditions and looking at the literature, reducing this gap is a relevant issue and always an open problem. In this paper, we propose an alternative inequality based on the Fourier Theory, more precisely on the Wirtinger inequalities. It is shown that this resulting inequality encompasses the Jensen one and also leads to tractable LMI conditions. In order to illustrate the potential gain of employing this new inequality with respect to the Jensen one, two applications on time-delay and sampled-data stability analysis are provided. | Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing. | Synthesis of concurrent systems with many similar processes Methods for synthesizing concurrent programs from temporal logicspecifications based on the use of a decision procedure for testingtemporal satisfiability have been proposed by Emerson and Clarkeand by Manna and Wolper. An important advantage of these synthesis methods is that they obviate the need to manually compose a program and manually construct a proof of its correctness. 
One only has to formulate a precise problem specification; the synthesis method then mechanically constructs a correct solution. A serious drawback of these methods in practice, however, is that they suffer from the state explosion problem. To synthesize a concurrent system consisting of K sequential processes, each having N states in its local transition diagram, requires construction of the global product-machine having about N^K global states in general. This exponential growth in K makes it infeasible to synthesize systems composed of more than 2 or 3 processes. In this article, we show how to synthesize concurrent systems consisting of many (i.e., a finite but arbitrarily large number K of) similar sequential processes. Our approach avoids construction of the global product-machine for K processes; instead, it constructs a two-process product-machine for a single pair of generic sequential processes. The method is uniform in K, providing a simple template that can be instantiated for each process to yield a solution for any fixed K. The method is also illustrated on synchronization problems from the literature. | On formal aspects of electronic (or digital) commerce: examples of research issues and challenges The notion of electronic or digital commerce is gaining widespread popularity. By and large, these developments are being led by industry and government, with academic research following these trends in the form of empirical and economic research. Much more fundamental improvements to (global) commerce are possible, but are presently being overlooked for lack of adequate formal theories, representations and tools. This paper attempts to incite research in these directions. | Action systems in incremental and aspect-oriented modeling Action systems were first introduced as an execution model that made it possible to model distributed systems at a high level of abstraction and to refine these models into implementation descriptions. This paper describes how these initial ideas have developed into a formally based specification and design method for reactive systems, and discusses some related research. In this approach, TLA is adopted as programming logic, and action systems are extended with various language facilities. Systems are modeled as closed systems, which makes effective interface refinement possible. The associated design method is based on refinement by superposition, and the resulting specifications are layered structures, in which aspect-orientation is supported by the possibility to compose independent refinements of common base systems. A new kind of analysis of observations of concurrent executions shows that TLA-based fairness assumptions can be used also in high-level abstractions of distributed systems. | Dual-Clustering-Based Hyperspectral Band Selection by Contextual Analysis. Hyperspectral image (HSI) involves vast quantities of information that can help with the image analysis. However, this information has sometimes been proved to be redundant, considering specific applications such as HSI classification and anomaly detection. To address this problem, hyperspectral band selection is viewed as an effective dimensionality reduction method that can remove the redundant components of HSI. Various HSI band selection methods have been proposed recently, and the clustering-based method is a traditional one. This agglomerative method has been considered simple and straightforward, while the performance is generally inferior to the state of the art.
To tackle the inherent drawbacks of the clustering-based band selection method, a new framework based on dual clustering is proposed in this paper. The main contributions can be summarized as follows: 1) a novel descriptor that reveals the context of HSI efficiently; 2) a dual clustering method that includes the contextual information in the clustering process; 3) a new strategy that selects the cluster representatives jointly considering the mutual effects of each cluster. Experimental results on three real-world HSIs verify the noticeable accuracy of the proposed method, with regard to the HSI classification application. The main comparison has been conducted among several recent clustering-based band selection methods and constraint-based band selection methods, demonstrating the superiority of the technique that we present. | 1.202304 | 0.202304 | 0.202304 | 0.202304 | 0.202304 | 0.067457 | 0.029234 | 0.000001 | 0 | 0 | 0 | 0 | 0 | 0
From diagnosis to diagnosability: axiomatization, measurement and application Classical views on testing and their associated testing models are not dealing with the question of fault repairing but only focus on fault detection. Diagnosis consists of determining the nature of a detected fault, of locating it and hopefully repairing it. Correlatively, the only standardized quality factors implied in the detection/repair aspects of software engineering are testability and maintainability: those quality factors are misleading since they do not pinpoint this question of the location/repairing effort, that can be identified under the concept of diagnosability. This paper is thus concerned with diagnosability, its definition and the axiomatization of its expected behavior. The paper aims at: • introducing and analysing diagnosability as a significant and complementary dimension of software testability, • producing a high-level definition and axiomatization of a diagnosability measurement generic enough to be adapted to various software paradigms: this property-based approach serves as a measurement "specification", independent on the application context and thus reusable, • detailing a diagnosability measure dedicated to data-flow software and especially test strategies impact on diagnosis and testing effort (from measure implementation to case study), • illustrating the reuse of the high-level axiomatization to the specific question of measuring the impact of assertions (or contracts for a designed by contract OO system) on diagnosis effort and preciseness.Throughout the paper, the concepts are illustrated on a case study provided by an industrial partner. At last, the reusability of the axiomatization is illustrated by proposing a measure of the impact of assertions (or contracts in a design by contract approach) on global software diagnosability. Main lessons concern both the diagnosability significance as a quality factor and the interest of an axiomatization-based methodology for building trustable software measurement. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. 
A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour.
This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism.
In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties, and furthermore, power-related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
An optimized hierarchical encryption technique for tamper recognition Digital images contain sensitive information that needs to be watermarked for ownership authentication and copyright verification. It is crucial to have a method to detect and recognize tampering when images are sent over insecure channels. The watermarking scenario becomes more complicated when an intruder is precluded from obtaining a watermark signal from a watermarked image since it can expose future point-to-point correspondences. The proposed scheme utilizes a hierarchical strategy for improving the security of the semi-fragile watermarking scheme that requires fewer data to be exchanged before each transaction. Consequently, the proposed watermarking method obtains a trade-off between robustness and imperceptibility by using a meta-heuristic approach, namely, the Sine Cosine Algorithm (SCA). Furthermore, an Artificial Neural Network (ANN) is built using the Softmax classifier to recognize possible attacks that might be performed by an intruder. The whole scheme is presented in the form of a GUI with the attack recognition triggered from the receiver’s side. Experimental results show improved image quality metrics like PSNR, correlation coefficient, and structural similarity when the scaling factor used in the watermarking algorithm is optimized using SCA. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. 
This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems.
Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. 
However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Derivation of algorithmic control structures in Event-B refinement. The Event-B formalism allows program specifications to be modelled at an abstract level and refined towards a concrete model. However, Event-B lacks explicit control flow structure and ordering is implicitly encoded in event guards. This makes it difficult to identify and apply rules for transformation of Event-B models to sequential code. This paper introduces a scheduling language to support the incremental derivation of algorithmic control structure for events as part of the Event-B refinement process. We provide intermediate control structures for non-deterministic iteration and choice that ease the transition from abstract specifications to sequential implementations. We present rules for transforming algorithmic structures to more concrete refinements. We illustrate our approach by applying our method to the Schorr–Waite graph marking algorithm. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. 
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. 
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Resolution scalable image coding with reversible cellular automata. In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the utilization of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm that uses simple local rules compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performances very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and the JPEG2000 standards under most coding conditions. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. 
In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. 
The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Functional design of a menu-tree interface within structured system development | Developing interactive information systems with the User Software Engineering methodology User Software Engineering is a methodology, supported by automated tools, for the systematic development of interactive information systems. The USE methodology gives particular attention to effective user involvement in the early stages of the software development process, concentrating on external design and the use of rapidly created and modified prototypes of the user interface. The USE methodology is supported by an integrated set of graphically based tools. This paper describes the User Software Engineering methodology and the tools that support the methodology. | On Overview of KRL, a Knowledge Representation Language | Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from
deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of
synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant
and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences
associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary
variables is also given. The applicability of the techniques presented is discussed through various examples; their use for
verification purposes is illustrated as well. | Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures the observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction. | Object-oriented modeling and design | Reasoning Algebraically about Loops We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops. | Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased. | Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated. 
| Hex-splines: a novel spline family for hexagonal lattices This paper proposes a new family of bivariate, nonseparable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact they form a Riesz basis. We also highlight the approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data. | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. 
Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.003922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Processing Negation in NL Interfaces to Knowledge Bases This paper deals with Natural Language (NL) question-answering to knowledge bases (KB). It considers the usual conceptual graphs (CG) approach for NL semantic interpretation by joins of canonical graphs and compares it to the computational linguistics approach for NL question-answering based on logical forms. After these theoretical considerations, the paper presents a system for querying a KB of CG in the domain of finances. It uses controlled English and processes large classes of negative questions. Internally the negation is interpreted as a replacement of the negated type by its siblings from the type hierarchy. The answer is found by KB projection, generalized and presented in NL in a rather summarized form, without a detailed enumeration of types. Thus the paper presents an interface for NL understanding and original techniques for application of CG operations (projection and generalization) as means for obtaining a more "natural" answer to the user's negative questions. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short).
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus.
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Probabilistic preemption control using frequency scaling for sporadic real-time tasks. | Optimal Priority Assignment Algorithms for Probabilistic Real-Time Systems. | A component-based framework for modeling and analyzing probabilistic real-time systems A challenging research issue of analyzing a real-time system is to model the tasks composing the system and the resource provided to the system. In this paper, we propose a probabilistic component-based model which abstracts in the interfaces both the functional and non-functional requirements of such systems. This approach allows designers to unify in the same framework probabilistic scheduling techniques and compositional guarantees that go from soft to hard real-time. We provide sufficient schedulability tests for task systems using such framework when the scheduler is either preemptive Fixed-Priority or Earliest Deadline First. | A framework for the response time analysis of fixed-priority tasks with stochastic inter-arrival times Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks where tasks have inter-arrival times described by discrete probabilistic distribution functions, instead of minimum inter-arrival (MIT) values. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | A semantics of multiple inheritance this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming. | The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism. | A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution.
A partial correctness proof using Scott-Strachey semantics is sketched in a later section. | Modelling information flow for organisations: A review of approaches and future challenges. Modelling is a classic approach to understanding complex problems that can be achieved diagrammatically to visualise concepts, and mathematically to analyse attributes of concepts. An organisation as a communicating entity is a made up of constructs in which people can have access to information and speak to each other. Modelling information flow for organisations is a challenging task that enables analysts and managers to better understand how to: organise and coordinate processes, eliminate redundant information flows and processes, minimise the duplication of information and manage the sharing of intra- and inter-organisational information. | From Action Systems to Modular Systems Action systems are used to extend program refinement methods for sequential programs, as described in the refinement calculus, to parallel and reactive system refinement. They provide a general description of reactive systems, capable of modeling terminating, possibly aborting and infinitely repeating systems. We show how to extend the action system model to refinement of modular systems. A module may export and import variables, it may provide access procedures for other modules, and it may itself access procedures of other modules. Modules may have autonomous internal activity and may execute in parallel or in sequence. Modules may be nested within each other. They may communicate by shared variables, shared actions, a generalized form of remote procedure calls and by persistent data structures. Both synchronous and asynchronous communication between modules is supported. The paper shows how a single framework can be used for both the specification of large systems, the modular decomposition of the system into smaller units and the refinement of the modules into program modules that can be described in a standard programming language and executed on standard hardware. | A Software Development Environment for Improving Productivity First Page of the Article | The navigation toolkit The problem | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.1 | 0.1 | 0.066667 | 0.0125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
A visual framework for modelling with heterogeneous notations This paper presents a visual framework for organizing models of systems which allows a mixture of notations, diagrammatic or text-based, to be used. The framework is based on the use of templates which can be nested and sometimes flattened. It is modular and can be used to structure the constraint space of the system, making it scalable with appropriate tool support. It is also flexible and extensible: users can choose which notations to use, mix them and add new notations or templates. The goal of this work is to provide more intuitive and expressive languages and frameworks to support the construction and presentation of rich and precise models. | On Diagram Tokens and Types Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representtation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax? | Towards a Formalization of Constraint Diagrams Geared to complement UML and to the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language | Drawing graphs nicely using simulated annealing The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants. 
| Straight-Line Drawing Algorithms for Hierarchical Graphs and Clustered Graphs Abstract. Hierarchical graphs and clustered graphs are useful non-classical graph models for structured relational information. Hierarchical graphs are graphs with layering structures; clustered graphs are graphs with recursive clustering structures. Both have applications in CASE tools, software visualization and VLSI design. Drawing algorithms for hierarchical graphs have been well investigated. However, the problem of planar straight-line representation has not been solved completely. In this paper we answer the question: does every planar hierarchical graph admit a planar straight-line hierarchical drawing? We present an algorithm that constructs such drawings in linear time. Also, we answer a basic question for clustered graphs, that is, does every planar clustered graph admit a planar straight-line drawing with clusters drawn as convex polygons? We provide a method for such drawings based on our algorithm for hierarchical graphs. Key Words. Computational geometry, Automatic graph drawing, Hierarchical graph, Clustered graph, | Algorithms for drawing graphs: an annotated bibliography Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and software engineering diagrams. In this paper we present a bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread over the broad spectrum of Computer Science. This bibliography constitutes an attempt to encompass both theoretical and application oriented papers from disparate areas. | The Object Flow Model: A Formal Framework for Describing the Dynamic Construction, Destruction and Interaction of Complex Objects This research complements active object-oriented database management systems by providing a formal, yet conceptually-natural model for complex object construction and destruction. The Object Flow Model (OFM), introduced in this paper, assumes an object-oriented database for the rich structural description of objects and for the specification of methods to manipulate objects. The OFM contributes a third component, the Object Flow Diagram (OFD), which provides a visual formalism to describe how multiple objects and events can actively invoke processing steps, how objects can become part of progressively more complex objects, and how complex objects can be picked apart. The OFD thus provides an invocation mechanism that is more general than a single message and a processing mechanism that may invoke multiple methods (so long as they apply to either the input or output objects). The development of the OFD was influenced by conceptual modeling languages and discrete event simulation languages and the formal semantics of the OFD is based on work in deductive databases. | Specifications are (preferably) executable The validation of software specifications with respect to explicit and implicit user requirements is extremely difficult. To ease the validation task and to give users immediate feedback of the behavior of the future software it was suggested to make specifications executable. However, Hayes and Jones (Hayes, Jones 89) argue that executable specifications should be avoided because executability can restrict the expressiveness of specification languages, and can adversely affect implementations. 
In this paper I will argue for executable specifications by showing that non-executable formal specifications can be made executable on almost the same level of abstraction and without essentially changing their structure. No new algorithms have to be introduced to get executability. In many cases the combination of property-orientation and search results in specifications based on the generate-and-test approach. Furthermore, I will demonstrate that declarative specification languages allow to combine high expressiveness and executability. | On Overview of KRL, a Knowledge Representation Language | Appraising Fairness in Languages for Distributed Programming The relations among various languages and models for distributed computation and various possible definitions of fairness are considered. Natural semantic criteria are presented which an acceptable notion of fairness should satisfy. These are then used to demonstrate differences among the basic models, the added power of the fairness notion, and the sensitivity of the fairness notion to irrelevant semantic interleavings of independent operations. These results are used to show that from the considerable variety of commonly used possibilities, only strong process fairness is appropriate for CSP if these criteria are adopted. We also show that under these criteria, none of the commonly used notions of fairness are fully acceptable for a model with an n-way synchronization mechanism. The notion of fairness most often mentioned for Ada is shown to be fully acceptable. For a model with nonblocking send operations, some variants of common fairness definitions are appraised, and two are shown to satisfy the suggested criteria. | An Approach to the Design of Distributed Systems with B AMN In this paper, we describe an approach to the design of distributed systems with B AMN. The approach is based on the action-system formalism which provides a framework for developing state-based parallel reactive systems. More specifically, we use the so-called CSP approach to action systems in which interaction between subsystems is by synchronised message passing and there is no sharing of state. We show that the abstract machines of B may be regarded as action systems and show how reactive refinement and decomposition of action systems may be applied to abstract machines. The approach fits in closely with the stepwise refinement method of B. | Fuzzy logic as a basis for reusing task‐based specifications | LANSF: a protocol modelling environment and its implementation SUMMARY LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix. | Analysis-Driven Lossy Compression of DNA Microarray Images. 
DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.212594 | 0.109319 | 0.039559 | 0.004987 | 0.002545 | 0.000679 | 0.000042 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Word Play: A History Of Voice Interaction In Digital Games The use of voice interaction in digital games has a long and varied history of experimentation but has never achieved sustained, widespread success. In this article, we review the history of voice interaction in digital games from a media archaeology perspective. Through detailed examination of publicly available information, we have identified and classified all games that feature some form of voice interaction and have received a public release. Our analysis shows that the use of voice interaction in digital games has followed a tidal pattern: rising and falling in seven distinct phases in response to new platforms and enabling technologies. We note characteristic differences in the way Japanese and Western game developers have used voice interaction to create different types of relationships between players and in-game characters. Finally, we discuss the implications for game design and scholarship in light of the increasing ubiquity of voice interaction systems. | IOT based wearable sensor for diseases prediction and symptom analysis in healthcare sector Humans with good health condition is some more difficult in today's life, because of changing food habit and environment. So we need awareness about the health condition to the survival. The health-support systems faces significant challenges like lack of adequate medical information, preventable errors, data threat, misdiagnosis, and delayed transmission. To overcome this problem, here we proposed wearable sensor which is connected to Internet of things (IoT) based big data i.e. data mining analysis in healthcare. Moreover, here we design Generalize approximate Reasoning base Intelligence Control (GARIC) with regression rules to gather the information about the patient from the IoT. Finally, Train the data to the Artificial intelligence (AI) with the use of deep learning mechanism Boltzmann belief network. Subsequently Regularization _ Genome wide association study (GWAS) is used to predict the diseases. Thus, if the people has affected by some diseases they will get warning by SMS, emails. Etc., after that they got some treatments and advisory from the doctors. | Language Teaching in 3D Virtual Worlds with Machinima: Reflecting on an Online Machinima Teacher Training Course AbstractThis article is based on findings arising from a large, two-year EU project entitled "Creating Machinima to Enhance Online Language Learning and Teaching" CAMELOT, which was the first to investigate the potential of machinima, a form of virtual filmmaking that uses screen captures to record activity in immersive 3D environments, for language teaching. The article examines interaction in two particular phases of the project: facilitator-novice teacher interaction in an online teacher training course which took place in Second Life and teachers' field-testing of machinima which arose from it. Examining qualitative data from interviews and screen recordings following two iterations of a 6-week online teacher training course which was designed to train novice teachers how to produce machinima and the evaluation of the field-testing, the article highlights the pitfalls teachers encountered and reinforces the argument that creating opportunities for pedagogical purposes in virtual worlds implies that teachers need to change their perspectives to take advantage of the affordances offered. 
| Resistance and Sexuality in Virtual Worlds: An LGBT Perspective Virtual worlds can provide a safe place for social movements of marginal and oppressed groups such as lesbian, gay, bisexual and transgender (LGBT). When the virtual safe places are under threat, the inhabitants of a virtual world register protests, which have critical implications for the real-world issues. The nature of emancipatory practices such as virtual protests in the digital realm research remains somewhat under-explored. Specifically, it remains to be seen how the oppressed communities such as LGBT take radical actions in virtual worlds in order to restore the imbalance of power. We conducted a 35-month netnographic study of an LGBT social movement in World of Warcraft. The lead researcher joined the LGBT social movement and data was captured through participant observations, discussion forums, and chat logs. Drawing on the critical theory of Michel Foucault, we present empirical evidence that illuminates emancipatory social movement practices in an online virtual world. The findings suggest that there are complex power relations in a virtual world and, when power balance is disrupted, LGBT players form complex ways to register protests, which invoke strategies to restore order in the virtual fields. | Investigating various application areas of three-dimensional virtual worlds for higher education. Three-dimensional virtual world (3DVW) have been adopted extensively in the education sector worldwide, and there has been remarkable growth in the application of these environments for distance learning. A wide variety of universities and educational organizations across the world have utilized this technology for their regular learning and teaching programs. The current study conducts a systematic review of the published studies relevant to the application of 3DVWs in higher education. A search of the literature was carried out in eight high-ranking scientific digital libraries. Following scrutiny according to inclusion and exclusion criteria, 165 papers out of 1402 publications were selected for review from a variety of disciplines over a 10-year time period. The systematic review process were summarised, a number of paper reviews were conducted and results in conjunction with applicability of 3DVWs in higher education were extracted. In this study, various application areas of 3DVWs in higher education were found and classified into 13 main categories. Additionally, implications for research and practice are presented to provide new directions for further research and practice in the field. | An efficient Swarm-Intelligence approach for task scheduling in cloud-based internet of things applications In our rapidly-growing big-data area, often the big sensory data from Internet of Things (IoT) cannot be sent directly to the far data-center in an efficient way because of the limitation in the network infrastructure. Fog computing, which has increasingly gained popularity for real-time applications, offers the utilization of local mini data-centers near the sensors to release the burden from the main data-center, and to exploit the full potential of cloud-based IoT. In this paper, a high-performance approach based on the Max–Min Ant System (MMAS), which is an efficient variation in the family of ant colony optimization algorithms, is proposed to tackle the static task-graph scheduling in homogeneous multiprocessor environments, the predominant technology used as mini-servers in fog computing. 
The main duty of the proposed approach is to properly manipulate the priority values of tasks so that the most optimal task-order can be achieved. Leveraging background knowledge of the problem, as heuristic values, has made the proposed approach very robust and efficient. Different random task-graphs with different shape parameters have been utilized to evaluate the proposed approach, and the results show its efficiency and superiority versus traditional counterparts from the performance perspective. | A semantics of multiple inheritance this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming. | On Formalism in Specifications A critique of a natural-language specification, followed by presentation of a mathematical alternative, demonstrates the weakness of natural language and the strength of formalism in requirements specifications. | Viewpoints: principles, problems and a practical approach to requirements engineering The paper includes a survey and discussion of viewpoint‐oriented approaches to requirements engineering and a presentation of new work in this area which has been designed with practical application in mind. We describe the benefits of viewpoint‐oriented requirements engineering and describe the strengths and weaknesses of a number of viewpoint‐oriented methods. We discuss the practical problems of introducing viewpoint‐oriented requirements engineering into industrial software engineering practice and why these have prevented the widespread use of existing approaches. We then introduce a new model of viewpoints called Preview. Preview viewpoints are flexible, generic entities which can be used in different ways and in different application domains. We describe the novel characteristics of the Preview viewpoints model and the associated processes of requirements discovery, analysis and negotiation. Finally, we discuss how well this approach addresses some outstanding problems in requirements engineering (RE) and the practical industrial problems of introducing new requirements engineering methods. | Ontologies for Enterprise Knowledge Management Ontologies are a key technology for enabling semantics-driven knowledge processing, and it is widely accepted that the next generation of knowledge management system will rely on conceptual models in the form of ontologies. Unfortunately, the development of real-world enterprise-wide ontology-based knowledge management systems is still in an early stage. The authors present an integrated enterprise knowledge management architecture developed within the Ontologging project dealing with several challenges related to applying ontologies in real-world environments. They focus on two important ontology management problems, namely, supporting multiple ontologies and managing ontology evolution. | Asynchronous system synthesis We propose a method for synthesising a set of components from a high-level specification of the intended behaviour of the target system. The designer proceeds via correctness-preserving transformation steps towards an implementable architecture of components which communicate asynchronously.
The interface model of each component specifies the communication protocol used. At each step a pre-defined component is extracted and the correctness of the step is proved. This ensures the compatibility of the components. We use Action Systems as our formal approach to system design. The method is inspired by hardware-oriented approaches with their component libraries, but is more general. We also explore the possibility of using tool support to administer the derivation, as well as to assist in correctness proofs. Here we rely on the tools supporting the B Method, as this method is closely related to Action Systems and has good tool support. | Universal Sparse Modeling | Software size estimation of object-oriented systems The strengths and weaknesses of existing size estimation techniques are discussed. The nature of software size estimation is considered. The proposed method takes advantage of a characteristic of object-oriented systems, the natural correspondence between specification and implementation, in order to enable users to come up with better size estimates at early stages of the software development cycle. Through a statistical approach the method also provides a confidence interval for the derived size estimates. The relation between the presented software sizing model and project cost estimation is also considered. | A Probabilistic Calculus for Probabilistic Real-Time Systems Challenges within real-time research are mostly in terms of modeling and analyzing the complexity of actual real-time embedded systems. Probabilities are effective in both modeling and analyzing embedded systems by increasing the amount of information for the description of elements composing the system. Elements are tasks and applications that need resources, schedulers that execute tasks, and resource provisioning that satisfies the resource demand. In this work, we present a model that considers component-based real-time systems with component interfaces able to abstract both the functional and nonfunctional requirements of components and the system. Our model faces probabilities and probabilistic real-time systems unifying in the same framework probabilistic scheduling techniques and compositional guarantees varying from soft to hard real time. We provide an algebra to work with the probabilistic notation developed and form an analysis in terms of sufficient probabilistic schedulability conditions for task systems with either preemptive fixed-priority or earliest deadline first scheduling paradigms. | 1.1 | 0.1 | 0.1 | 0.1 | 0.05 | 0.016667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Deontic Relevant Logic in Knowledge-based Requirements Engineering Requirements engineering is inherently concerned with discovering and/or predicting purposes, goals, and objectives of software systems. To discover/predict, analyze, elicit, specify, and reason about various requirements of software systems, we need a right fundamental logic system to provide us with a logical validity criterion of reasoning as well as a formal representation and specification language. This short position paper briefly shows that deontic relevant logic is a hopeful candidate for the fundamental logic we need. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking.
As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic.
| 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, the time wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
SAPIN: a framework for the structural analysis of protein interaction networks. Protein interaction networks are widely used to depict the relationships between proteins. These networks often lack the information on physical binary interactions, and they do not inform whether there is incompatibility of structure between binding partners. Here, we introduce SAPIN, a framework dedicated to the structural analysis of protein interaction networks. SAPIN first identifies the protein parts that could be involved in the interaction and provides template structures. Next, SAPIN performs structural superimpositions to identify compatible and mutually exclusive interactions. Finally, the results are displayed using Cytoscape Web. | UniProt Knowledgebase: a hub of integrated protein data. The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases. | A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Motivation: When running experiments that involve multiple high density oligonucleotide arrays, it is important to remove sources of variation between arrays of non-biological origin. Normalization is a process for reducing this variation. It is common to see non-linear relations between arrays and the standard normalization provided by Affymetrix does not perform well in these situations. Results: We present three methods of performing normalization at the probe intensity level. These methods are called complete data methods because they make use of data from all arrays in an experiment to form the normalizing relation. These algorithms are compared to two methods that make use of a baseline array: a one number scaling based algorithm and a method that uses a non-linear normalizing relation by comparing the variability and bias of an expression measure. Two publicly available datasets are used to carry out the comparisons. The simplest and quickest complete data method is found to perform favorably. Availability: Software implementing all three of the complete data normalization methods is available as part of the R package Affy, which is a part of the Bioconductor project http://www.bioconductor.org. Contact: [email protected] Supplementary information: Additional figures may be found at http://www.stat.berkeley.edu/~bolstad/normalize/index.html. | A comparison of background correction methods for two-colour microarrays Motivation: Microarray data must be background corrected to remove the effects of non-specific binding or spatial heterogeneity across the array, but this practice typically causes other problems such as negative corrected intensities and high variability of low intensity log-ratios.
Different estimators of background, and various model-based processing methods, are compared in this study in search of the best option for differential expression analyses of small microarray experiments. Results: Using data where some independent truth in gene expression is known, eight different background correction alternatives are compared, in terms of precision and bias of the resulting gene expression measures, and in terms of their ability to detect differentially expressed genes as judged by two popular algorithms, SAM and limma eBayes. A new background processing method (normexp) is introduced which is based on a convolution model. The model-based correction methods are shown to be markedly superior to the usual practice of subtracting local background estimates. Methods which stabilize the variances of the log-ratios along the intensity range perform the best. The normexp+offset method is found to give the lowest false discovery rate overall, followed by morph and vsn. Like vsn, normexp is applicable to most types of two-colour microarray data. Availability: The background correction methods compared in this article are available in the R package limma (Smyth, 2005) from http://www.bioconductor.org. Contact: [email protected] Supplementary information: Supplementary data are available from http://bioinf.wehi.edu.au/resources/webReferences.html. | Inferred Biomolecular Interaction Server-A Web Server To Analyze And Predict Protein Interacting Partners And Binding Sites IBIS is the NCBI Inferred Biomolecular Interaction Server. This server organizes, analyzes and predicts interaction partners and locations of binding sites in proteins. IBIS provides annotations for different types of binding partners (protein, chemical, nucleic acid and peptides), and facilitates the mapping of a comprehensive biomolecular interaction network for a given protein query. IBIS reports interactions observed in experimentally determined structural complexes of a given protein, and at the same time IBIS infers binding sites/interacting partners by inspecting protein complexes formed by homologous proteins. Similar binding sites are clustered together based on their sequence and structure conservation. To emphasize biologically relevant binding sites, several algorithms are used for verification in terms of evolutionary conservation, biological importance of binding partners, size and stability of interfaces, as well as evidence from the published literature. IBIS is updated regularly and is freely accessible via http://www.ncbi.nlm.nih.gov/Structure/ibis/ibis.html. | Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients.
Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls. | Feedback stabilization of some event graph models The authors introduce several notions of stability for event graph models, timed or not. The stability is similar to the boundedness notion for Petri nets. The event graph models can be controlled by an output feedback which takes information from some observable transitions and can disable some controllable transitions. The controller itself is composed of an event graph. In this framework the authors solve the corresponding stabilization problems, i.e., they determine whether such a controller can prevent the explosion of the number of tokens. | Automated consistency checking of requirements specifications This article describes a formal analysis technique, called consistency checking, for automatic detection of errors, such as type errors, nondeterminism, missing cases, and circular definitions, in requirements specifications. The technique is designed to analyze requirements specifications expressed in the SCR (Software Cost Reduction) tabular notation. As background, the SCR approach to specifying requirements is reviewed. To provide a formal semantics for the SCR notation and a foundation for consistency checking, a formal requirements model is introduced; the model represents a software system as a finite-state automaton which produces externally visible outputs in response to changes in monitored environmental quantities. Results of two experiments are presented which evaluated the utility and scalability of our technique for consistency checking in a real-world avionics application. The role of consistency checking during the requirements phase of software development is discussed. | Fuzzy identification of systems and its application to modeling and control | Database design with common sense business reasoning and learning Automated database design systems embody knowledge about the database design process. However, their lack of knowledge about the domains for which databases are being developed significantly limits their usefulness. A methodology for acquiring and using general world knowledge about business for database design has been developed and implemented in a system called the Common Sense Business Reasoner, which acquires facts about application domains and organizes them into a hierarchical, context-dependent knowledge base. This knowledge is used to make intelligent suggestions to a user about the entities, attributes, and relationships to include in a database design. A distance function approach is employed for integrating specific facts, obtained from individual design sessions, into the knowledge base (learning) and for applying the knowledge to subsequent design problems (reasoning).
| An Operational Approach to Requirements Specification for Embedded Systems The approach to requirements specification for embedded systems described in this paper is called "operational" because a requirements specification is an executable model of the proposed system interacting with its environment. The approach is embodied by the language PAISLey, which is motivated and defined herein. Embedded systems are characterized by asynchronous parallelism, even at the requirements level; PAISLey specifications are constructed by interacting processes so that this can be represented directly. Embedded systems are also characterized by urgent performance requirements, and PAISLey offers a formal, but intuitive, treatment of performance. | Refinement calculus, part I: sequential nondeterministic programs A lattice theoretic framework for the calculus of program refinement is presented. Specifications and program statements are combined into a single (infinitary) language of commands which permits miraculous, angelic and demonic statements to be used in the description of program behavior. The weakest precondition calculus is extended to cover this larger class of statements and a game-theoretic interpretation is given for these constructs. The language is complete, in the sense that every monotonic predicate transformer can be expressed in it. The usual program constructs can be defined as derived notions in this language. The notion of inverse statements is defined and its use in formalizing the notion of data refinement is shown. | Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
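The normalization and background-correction abstracts in the row above both revolve around complete-data methods, of which quantile normalization is the simplest. As a minimal sketch of that idea (not the Bioconductor affy/limma implementation; the toy matrix is invented for the example and ties are handled naively), the following forces every array to share the same empirical intensity distribution:

```python
import numpy as np

def quantile_normalize(intensities: np.ndarray) -> np.ndarray:
    """Quantile-normalize a (probes x arrays) intensity matrix.

    Every column (array) is mapped onto a common reference distribution,
    taken as the mean of the per-array sorted intensities.
    """
    order = np.argsort(intensities, axis=0)        # rank of each probe per array
    sorted_vals = np.sort(intensities, axis=0)
    reference = sorted_vals.mean(axis=1)           # shared target distribution
    normalized = np.empty_like(intensities, dtype=float)
    for j in range(intensities.shape[1]):
        normalized[order[:, j], j] = reference     # assign reference quantiles back
    return normalized

# Toy data: 5 probes measured on 3 arrays with different overall scales.
raw = np.array([[5.0, 40.0, 3.0],
                [2.0, 10.0, 4.0],
                [3.0, 45.0, 6.0],
                [4.0, 20.0, 8.0],
                [1.0, 30.0, 1.0]])
print(quantile_normalize(raw))   # each column now holds the same set of values
```

After normalization every column contains exactly the same multiset of values, which is the property the complete-data comparison above exploits to remove array-to-array, non-biological variation.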
Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine. | Automated assists to the behavioral modeling process The coding of behavioral models is a time consuming and error prone process. In this paper the authors describe automated assists to the behavioral modeling process which reduce the coding time and result in models which have a well defined structure, making it easier to ensure their accuracy. The approach uses a particular graphical representation for the model. An interactive tool then assists in converting the graphical representation to the behavioral HDL code. The authors discuss a pictorial representation for VHDL behavioral models. In VHDL an architectural body is used to define the behavior of a device. These architectural bodies are a set of concurrently running processes. These processes are either process blocks or various forms of the signal assignment statements. One can give a pictorial representation to a behavioral architectural body by means of a process model graph (PMG) | PHRAN-SPAN: a natural language interface for system specifications | Conceptual representation of waveforms for temporal reasoning Addresses the problem of comparing and unifying temporal relationships between activities expressed in timing diagrams and natural language narrative (English). This problem often occurs in specifications expressing behavioral requirements and constraints. The approach followed is to translate both diagrams and text into a common knowledge representation (conceptual graphs) employing temporal relations developed for temporal interval logic. In this knowledge representation, the requirements may be integrated, checked for inconsistencies, and subjected to additional temporal reasoning. An algorithm of polynomial complexity for generating a compact representation of temporal relationships from timing diagrams is presented. Generation of comparable conceptual graphs from English statements is described by using examples. Integrating conceptual graphs from timing diagrams and sentences while checking for inconsistencies is also of polynomial complexity. | Implementing a semantic interpreter using conceptual graphs | Executing Conceptual Graphs This paper addresses the issue of directly executing conceptual graphs by developing an execution model that simulates interactions among behavioral concepts and with attributes related to object concepts.
Several researchers have proposed various mechanisms for computing or simulating conceptual graphs, but these usually rely on extensions to conceptual graphs. The simulation algorithm described in this paper is inspired by digital logic simulators and reactive systems simulators. Behavior in conceptual graphs is described by action, event and state concept types along with all their subtypes. Activity in such concepts propagates over conceptual relations to invoke activity or changes in other behavioral concepts or to affect the attributes related to object type concepts. The challenging issues of orderly simulation of behavior recursively described by other graphs, and of combinational relations, are also addressed. | Human-computer interface development: concepts and systems for its management Human-computer interface management, from a computer science viewpoint, focuses on the process of developing quality human-computer interfaces, including their representation, design, implementation, execution, evaluation, and maintenance. This survey presents important concepts of interface management: dialogue independence, structural modeling, representation, interactive tools, rapid prototyping, development methodologies, and control structures. Dialogue independence is the keystone concept upon which all the other concepts depend. It is a characteristic that separates design of the interface from design of the computational component of an application system so that modifications in either tend not to cause changes in the other. The role of a dialogue developer, whose main purpose is to create quality interfaces, is a direct result of the dialogue independence concept. Structural models of the human-computer interface serve as frameworks for understanding the elements of interfaces and for guiding the dialogue developer in their construction. Representation of the human-computer interface is accomplished by a variety of notational schemes for describing the interface. Numerous kinds of interactive tools for human-computer interface development free the dialogue developer from much of the tedium of "coding" dialogue. The early ability to observe behavior of the interface—and indeed that of the whole application system—provided by rapid prototyping increases communication among system designers, implementers, evaluators, and end-users. Methodologies for interactive system development consider interface management to be an integral part of the overall development process and give emphasis to evaluation in the development life cycle. Finally, several types of control structures govern how sequencing among dialogue and computational components is designed and executed. Numerous systems for human-computer interface management are presented to illustrate these concepts. | Synergy: A Conceptual Graph Activation-Based Language This paper presents the core of Synergy, an implemented visual multi-paradigm programming language based on executable Conceptual Graphs (CG). Execution is based on a CG-activation mechanism for which concept lifecycle, relation propagation rules and referent instantiation constitute the key elements. In this paper we define the activation mechanism and the CG structure (concept, relation, context, co-reference) used in Synergy as well as the concept type definition, the encapsulation mechanism and the knowledge base of Synergy. Examples are given to illustrate some aspects of the language.
Hybrid object-oriented and concurrent object-oriented uses of Synergy are presented in other papers [9, 10]. | Abstraction of objects by conceptual clustering Closely bound to the logic of first-order predicates, the formalism of conceptual graphs constitutes a knowledge representation language. The abstraction of systems presents several advantages. It helps to render complex systems more understandable, thus facilitating their analysis and their design. Our approach to conceptual graph abstraction, or conceptual clustering, is based on rectangular decomposition. It produces a set of clusters representing similarities between subsets of objects to be abstracted, organized into a hierarchy of classes: the Knowledge Space. Some conceptual clustering methods already exist. Our approach is distinguishable from other approaches insofar as it allows a gain in space and time. | Information system design methodology. | Managing Conflicts in Goal-Driven Requirements Engineering A wide range of inconsistencies can arise during requirements engineering as goals and requirements are elicited from multiple stakeholders. Resolving such inconsistencies sooner or later in the process is a necessary condition for successful development of the software implementing those requirements. The paper first reviews the main types of inconsistency that can arise during requirements elaboration, defining them in an integrated framework and exploring their interrelationships. It then concentrates on the specific case of conflicting formulations of goals and requirements among different stakeholder viewpoints or within a single viewpoint. A frequent, weaker form of conflict called divergence is introduced and studied in depth. Formal techniques and heuristics are proposed for detecting conflicts and divergences from specifications of goals/requirements and of domain properties. Various techniques are then discussed for resolving conflicts and divergences systematically by introduction of new goals or by transformation of specifications of goals/objects toward conflict-free versions. Numerous examples are given throughout the paper to illustrate the practical relevance of the concepts and techniques presented. The latter are discussed in the framework of the KAOS methodology for goal-driven requirements engineering. | Understanding the requirements for developing open source software systems This study presents an initial set of findings from an empirical study of social processes, technical system configurations, organizational contexts, and interrelationships that give rise to open software. The focus is directed at understanding the requirements for open software development efforts, and how the development of these requirements differs from those traditional to software engineering and requirements engineering. Four open software development communities are described, examined, and compared to help discover what these differences may be. Eight kinds of software informalisms are found to play a critical role in the elicitation, analysis, specification, validation, and management of requirements for developing open software systems. Subsequently, understanding the roles these software informalisms take in a new formulation of the requirements development process for open source software is the focus of this study.
This focus enables considering a reformulation of the requirements engineering process and its associated artifacts or (in)formalisms to better account for the requirements for developing open source software systems. | Fuzzy Time Series Forecasting With a Probabilistic Smoothing Hidden Markov Model Since its emergence, the study of fuzzy time series (FTS) has attracted more attention because of its ability to deal with the uncertainty and vagueness that are often inherent in real-world data resulting from inaccuracies in measurements, incomplete sets of observations, or difficulties in obtaining measurements under uncertain circumstances. The representation of fuzzy relations that are obtained from a fuzzy time series plays a key role in forecasting. Most of the works in the literature use the rule-based representation, which tends to encounter the problem of rule redundancy. A remedial forecasting model was recently proposed in which the relations were established based on the hidden Markov model (HMM). However, its forecasting performance generally deteriorates when encountering more zero probabilities owing to fewer fuzzy relationships that exist in the historical temporal data. This paper thus proposes an enhanced HMM-based forecasting model by developing a novel fuzzy smoothing method to overcome performance deterioration. To deal with uncertainty more appropriately, the roulette-wheel selection approach is applied to probabilistically determine the forecasting result. The effectiveness of the proposed model is validated through real-world forecasting experiments, and performance comparison with other benchmarks is conducted by a Monte Carlo method. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.202392 | 0.202392 | 0.202392 | 0.101196 | 0.022923 | 0.000679 | 0.000056 | 0.00001 | 0.000003 | 0.000001 | 0 | 0 | 0 | 0 |
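The semantic-grammar abstract at the head of the row above argues for organizing a parser around domain concepts (things like ships and their attributes) rather than general syntactic categories. The toy interpreter below is only a sketch of that idea: the lexicon, the single question pattern, and the ships domain are all invented for illustration and are not taken from PHRAN-SPAN or any other system cited here.

```python
import re

# "Semantic" categories: the grammar is keyed by domain concepts
# (ship, attribute) instead of NP/VP-style syntactic ones.
LEXICON = {
    "ship": {"kennedy": "JFK", "enterprise": "CVN-65"},
    "attribute": {"length": "length_m", "speed": "max_speed_kn"},
}

PATTERN = re.compile(r"what is the (?P<attribute>\w+) of the (?P<ship>\w+)\??$")

def parse(query: str):
    """Map a constrained English question onto a structured lookup request."""
    m = PATTERN.match(query.strip().lower())
    if m is None:
        return None
    field = LEXICON["attribute"].get(m.group("attribute"))
    entity = LEXICON["ship"].get(m.group("ship"))
    if field is None or entity is None:
        return None
    return {"op": "lookup", "entity": entity, "field": field}

print(parse("What is the length of the Kennedy?"))
# -> {'op': 'lookup', 'entity': 'JFK', 'field': 'length_m'}
print(parse("Tell me a joke"))   # -> None: outside the limited domain of discourse
```

Out-of-domain input simply fails to parse, which mirrors the trade-off the abstract describes between constrained query languages and unrestricted understanding.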
Rotation, scale and translation invariant spread spectrum digital image watermarking A digital watermark is an invisible mark embedded in a digital image which may be used for a number of different purposes including image captioning and copyright protection. This paper describes how a combination of spread spectrum encoding of the embedded message and transform-based invariants can be used for digital image watermarking. In particular, it is described how a Fourier-Mellin-based approach can be used to construct watermarks which are designed to be unaffected by any combination of rotation and scale transformations. In addition, a novel method of CDMA spread spectrum encoding is introduced which allows one to embed watermark messages of arbitrary length and which needs only a secret key for decoding. The paper also describes the usefulness of Reed Solomon error-correcting codes in this scheme. (C) 1998 Elsevier Science B.V. All rights reserved. | Towards second generation watermarking schemes The digital watermarking schemes of today use pixels (samples in the case of audio), frequency or other transform coefficients to embed the information. The drawback of such schemes is that the watermark is not embedded in the perceptually significant portions of the data. We refer to such techniques as first generation watermarking schemes. In this paper we introduce the concept of second generation watermarking schemes which, unlike first generation watermarking schemes, employ the notion of data features. We propose a scheme based on point features in images using a scale interaction technique based on 2D continuous wavelets. The features are used to compute a Voronoi partition of the image. The watermark is embedded in each segment using spread spectrum watermarking. In the recovery process the same features are detected, and again used to partition the image. Then the watermark is extracted from each segment separately. | Geometrically invariant watermarking using feature points This paper presents a new approach for watermarking of digital images providing robustness to geometrical distortions. The weaknesses of classical watermarking methods to geometrical distortions are outlined first. Geometrical distortions can be decomposed into two classes: global transformations such as rotations and translations and local transformations such as the StirMark attack. An overview of existing self-synchronizing schemes is then presented. These schemes can use periodical properties of the mark, invariant properties of transforms, template insertion, or information provided by the original image to counter geometrical distortions. Thereafter, a new class of watermarking schemes using the image content is presented. We propose an embedding and detection scheme where the mark is bound with a content descriptor defined by salient points. Three different types of feature points are studied and their robustness to geometrical transformations is evaluated to develop an enhanced detector. The embedding of the signature is done by extracting feature points of the image and performing a Delaunay tessellation on the set of points. The mark is embedded using a classical additive scheme inside each triangle of the tessellation. The detection is done using correlation properties on the different triangles. The performance of the presented scheme is evaluated after JPEG compression, geometrical attack and transformations. Results show that the scheme is robust to these different manipulations.
Finally, in our concluding remarks, we analyze the different perspectives of such content-based watermarking scheme. | Scale & Affine Invariant Interest Point Detectors In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix.Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point.We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale/affine invariant descriptors computed on the regions associated with our points. | Seam carving for content-aware image resizing Effective resizing of images should not only use geometric constraints, but consider the image content as well. We present a simple image operator called seam carving that supports content-aware image resizing for both reduction and expansion. A seam is an optimal 8-connected path of pixels on a single image from top to bottom, or left to right, where optimality is defined by an image energy function. By repeatedly carving out or inserting seams in one direction we can change the aspect ratio of an image. By applying these operators in both directions we can retarget the image to a new size. The selection and order of seams protect the content of the image, as defined by the energy function. Seam carving can also be used for image content enhancement and object removal. We support various visual saliency measures for defining the energy of an image, and can also include user input to guide the process. By storing the order of seams in an image we create multi-size images, that are able to continuously change in real time to fit a given size. | Robust video watermarking based on affine invariant regions in the compressed domain This paper proposes a novel robust video watermarking scheme based on local affine invariant features in the compressed domain. This scheme is resilient to geometric distortions and quite suitable for DCT-encoded compressed video data because it performs directly in the block DCTs domain. 
In order to synchronize the watermark, we use local invariant feature points obtained through the Harris-Affine detector which is invariant to affine distortions. To decode the frames from the DCT domain to the spatial domain as fast as possible, a fast inter-transformation between block DCTs and sub-block DCTs is employed, and down-sampled frames in the spatial domain are obtained by replacing each sub-block's DCT of 2x2 pixels with half of the corresponding DC coefficient. The above-mentioned strategy can significantly save computational cost in comparison with the conventional method which accomplishes the same task via inverse DCT (IDCT). The watermark detection is performed in the spatial domain along with the decoded video playing, so it is not sensitive to video format conversion. Experimental results demonstrate that the proposed scheme is transparent and robust to signal-processing attacks, geometric distortions including rotation, scaling, aspect ratio changes, linear geometric transforms, cropping and combinations of several attacks, frame dropping, and frame rate conversion. | Genetic algorithm based methodology for breaking the steganalytic systems. Steganalytic techniques are used to detect whether an image contains a hidden message. By analyzing various image features between stego-images (the images containing hidden messages) and cover-images (the images containing no hidden messages), a steganalytic system is able to detect stego-images. In this paper, we present a new concept of developing a robust steganographic system by artificially counterfeiting statistic features instead of the traditional strategy by avoiding the change of statistic features. We apply genetic algorithm based methodology by adjusting gray values of a cover-image while creating the desired statistic features to generate the stego-images that can break the inspection of steganalytic systems. Experimental results show that our algorithm can not only pass the detection of current steganalytic systems, but also increase the capacity of the embedded message and enhance the peak signal-to-noise ratio of stego-images. | Image authentication algorithm with recovery capabilities based on neural networks in the DCT domain In this study, the authors propose an image authentication algorithm in the DCT domain based on neural networks. The watermark is constructed from the image to be watermarked. It consists of the average value of each 8 × 8 block of the image. Each average value of a block is inserted in another supporting block sufficiently distant from the protected block to prevent simultaneous deterioration of the image and the recovery data during local image tampering. Embedding is performed in the middle frequency coefficients of the DCT transform. In addition, a neural network is trained and used later to recover tampered regions of the image. Experimental results show that the proposed method is robust to JPEG compression and can not only localise alterations but also recover them. | Maris: map recognition input system A map recognition input system called MARIS is developed to digitize large-reduced-scale maps into a layered data form. This paper presents an experimental workstation, a vector-based recognition method, and an intelligent interaction function which are devised in order to enhance input speed. The recognition method is capable of extracting building lines, contour lines, and lines representing railways, roads and water areas.
The recognition and the interaction utilize new efficient line tracing/tracking techniques. Experimental results show that the input time using MARIS can be reduced to about 25% of that of a system using a conventional interactive digitizer. | Abel lemma-based finite-sum inequality and its application to stability analysis for linear discrete time-delay systems This paper is concerned with stability of linear discrete time-delay systems. Note that a tighter estimation on a finite-sum term appearing in the forward difference of some Lyapunov functional leads to a less conservative delay-dependent stability criterion. By using Abel lemma, a novel finite-sum inequality is established, which can provide a tighter estimation than the ones in the literature for the finite-sum term. Applying this Abel lemma-based finite-sum inequality, a stability criterion for linear discrete time-delay systems is derived. It is shown through numerical examples that the stability criterion can provide a larger admissible maximum upper bound than stability criteria using a Jensen-type inequality approach and a free-weighting matrix approach. | Appraising Fairness in Languages for Distributed Programming The relations among various languages and models for distributed computation and various possible definitions of fairness are considered. Natural semantic criteria are presented which an acceptable notion of fairness should satisfy. These are then used to demonstrate differences among the basic models, the added power of the fairness notion, and the sensitivity of the fairness notion to irrelevant semantic interleavings of independent operations. These results are used to show that from the considerable variety of commonly used possibilities, only strong process fairness is appropriate for CSP if these criteria are adopted. We also show that under these criteria, none of the commonly used notions of fairness are fully acceptable for a model with an n-way synchronization mechanism. The notion of fairness most often mentioned for Ada is shown to be fully acceptable. For a model with nonblocking send operations, some variants of common fairness definitions are appraised, and two are shown to satisfy the suggested criteria. | The weakest precondition calculus: Recursion and duality An extension of Dijkstra's guarded command language is studied, including unbounded demonic choice and a backtrack operator. We consider three orderings on this language: a refinement ordering defined by Back, a new deadlock ordering, and an approximation ordering of Nelson. The deadlock ordering is in between the two other orderings. All operators are monotonic in Nelson's ordering, but backtracking is not monotonic in Back's ordering and sequential composition is not monotonic for the deadlock ordering. At first sight recursion can only be added using Nelson's ordering. We show that, under certain circumstances, least fixed points for non-monotonic functions can be obtained by iteration from the least element. This permits the addition of recursion even using Back's ordering or the deadlock ordering in a fully compositional way. In order to give a semantic characterization of the three orderings that relates initial states to possible outcomes of the computation, the relation between predicate transformers and discrete power domains is studied. We consider (two versions of) the Smyth power domain and the Egli-Milner power domain. 
| Elastic Hierarchies: Combining Treemaps and Node-Link Diagrams We investigate the use of elastic hierarchies for representing trees, where a single graphical depiction uses a hybrid mixture, or "interleaving", of more basic forms at different nodes of the tree. In particular, we explore combinations of node-link and Treemap forms, to combine the space-efficiency of Treemaps with the structural clarity of node-link diagrams. A taxonomy is developed to characterize the design space of such hybrid combinations. A software prototype is described, which we used to explore various techniques for visualizing, browsing and interacting with elastic hierarchies, such as side-by-side overview and detail views, highlighting and rubber banding across views, visualization of multiple foci, and smooth animations across transitions. The paper concludes with a discussion of the characteristics of elastic hierarchies and suggestions for research on their properties and uses. | Memory dissipative control for singular T-S fuzzy time-varying delay systems under actuator saturation. This paper considers the problem of memory dissipative control for singular T–S fuzzy time-varying delay systems under actuator saturation. A delay-central-point (DCP) method is presented to develop less conservative delay-dependent conditions. The memory state feedback controller design problem can then be solved via linear matrix inequalities (LMIs) such that the closed-loop system is not only admissible, but also strictly (Q,V,R)-α-dissipative. Then, the estimation of the largest domain of attraction for the system is formulated and solved as an LMI optimization problem. Finally, some simulations are provided to demonstrate the effectiveness and superiority of the proposed method. | 1.017714 | 0.020025 | 0.017084 | 0.014015 | 0.01177 | 0.010081 | 0.00508 | 0.000062 | 0.000003 | 0 | 0 | 0 | 0 | 0
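Several abstracts in the row above (the Fourier-Mellin, second-generation, and feature-point schemes) share one core mechanism: additive spread-spectrum embedding with correlation-based detection. The sketch below shows just that mechanism on raw pixels with NumPy; real schemes embed in transform coefficients and add perceptual masking and synchronization, so the strength, the image size, and the pixel-domain choice here are assumptions made purely for illustration.

```python
import numpy as np

def embed(image: np.ndarray, key: int, strength: float = 3.0) -> np.ndarray:
    """Additively embed a key-derived pseudorandom +/-1 pattern."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    return image.astype(float) + strength * pattern

def detect(image: np.ndarray, key: int) -> float:
    """Correlate the (possibly attacked) image with the key's pattern."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    centered = image.astype(float) - image.mean()
    return float((centered * pattern).mean())

host = np.random.default_rng(0).uniform(0, 255, size=(128, 128))
marked = embed(host, key=42)
print(detect(marked, key=42))   # close to the embedding strength for the right key
print(detect(marked, key=7))    # close to zero for a wrong key
```

The detection statistic concentrates around the embedding strength when the key matches and around zero otherwise; the cited schemes then make this test survive geometric attacks by re-synchronizing on transform invariants or feature points.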
Development of Low-Complexity Video Watermarking With Conjugate Symmetric Sequency-Complex Hadamard Transform. In this letter, a blind video watermarking scheme in the phase domain is proposed with the conjugate symmetric sequency-complex Hadamard transform (CS-SCHT). The watermark is made imperceptible and robust using methods such as low-amplitude block selection and amplitude boost. Simulations are conducted to perform objective and subjective evaluation on watermark imperceptibility and robustness against attacks, such as high efficiency video coding compression, rescaling, and cropping. Performance metrics, such as peak signal to noise ratio (PSNR), mean opinion score (MOS), and bit error rate (BER), are used, and a comparison is carried out with the discrete Fourier transform (DFT). It has been shown that CS-SCHT offers comparable performance with DFT in terms of PSNR, MOS, and BER. As this transform requires only a few multiplications to compute the transform kernel, this scheme offers a significant hardware saving of 37.5% for high-definition videos compared with DFT. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short).
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus.
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
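The CS-SCHT watermarking abstract at the head of this row reports PSNR and BER; both metrics are standard and easy to state in code. The sketch below computes them with NumPy on synthetic data, so the frame, the noise level, and the 128-bit watermark are assumptions made only for the example, not values from the letter.

```python
import numpy as np

def psnr(original: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bit_error_rate(sent: np.ndarray, received: np.ndarray) -> float:
    """Fraction of watermark bits recovered incorrectly."""
    return float(np.mean(sent != received))

frame = np.random.default_rng(1).integers(0, 256, size=(64, 64))
noisy = np.clip(frame + np.random.default_rng(2).normal(0, 2, size=frame.shape), 0, 255)
print(round(psnr(frame, noisy), 2))          # roughly 42 dB for noise of std 2

watermark = np.random.default_rng(3).integers(0, 2, size=128)
recovered = watermark.copy()
recovered[:4] ^= 1                           # pretend an attack flipped four bits
print(bit_error_rate(watermark, recovered))  # 4/128 = 0.03125
```

In a full evaluation these two numbers would be reported per attack (compression, rescaling, cropping), which is exactly how the abstract frames its comparison against the DFT.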
Comparison of four design methods for real-time software development Four design methods that are of current interest in real-time software development are compared. The comparison presents the relative strengths and weaknesses of each method, with additional information on graphic notation and the recommended sequence of steps involved in the use of each method. The methods selected for comparison are Structured Design for Real-Time Systems, object-oriented design, PAMELA (Process Abstraction Method for Embedded Large Applications), and SCR (Software Cost Reduction project from the Naval Research Laboratory). | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking.
As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus.
| 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Refining Action Systems within B-Tool. Action systems is a formalism designed for the construction of parallel and distributed systems in a stepwise manner within the refinement calculus. In this paper we show how action systems can be derived and refined within a mechanical proof tool, the B-Tool. We describe how action systems are embedded in B-Tool. Due to this embedding we can now develop parallel and distributed systems within the B-Tool. We also show how a typical and nontrivial refinement rule, the superposition refinement... | Simulation Machines for Checking Action System Refinements Action systems provide a formal approach to modelling parallel and reactive systems. They have a well established theory of refinement supported by simulation-based proof rules. This paper introduces an automatic approach for verifying action system refinements utilising standard CTL model checking. To do this, we encode each of the simulation conditions as a simulation machine, a Kripke structure on which the proof obligation can be discharged by checking that an associated CTL property holds. This procedure transforms each simulation condition into a model checking problem. Each simulation condition can then be model checked in isolation, or, if desired, together with the other simulation conditions by combining the simulation machines and the CTL properties. | Non-interference through determinism The standard approach to the specification of a secure system is to present a (usually state-based) abstract security model separately from the specification of the system's functional requirements, and to establish a correspondence between the two specifications. This complex treatment has resulted in development methods distinct from those usually advocated for general applications. We provide a novel and intellectually satisfying formulation of security properties in a process algebraic framework, and show that these are preserved under refinement. We relate the results to a more familiar state-based (Z) specification methodology. There are efficient algorithms for verifying our security properties using model checking. | Simulations Between Specifications of Distributed Systems In the stepwise development of a distributed system, the problem arises of verifying that a specification at a lower level of abstraction correctly implements a specification at a higher level of abstraction. Forward and backward simulation have been proposed as verification techniques for this problem. In this paper, we study forward and backward simulation in a framework where specifications are given as labeled transition systems with fairness requirements. We aim at clarifying the connection between simulations and the auxiliary variable constructions of Abadi and Lamport. In the paper, we also relax the earlier restriction that backward simulations be finitary. For a simple specification notation, similar to the action system formalism or Unity, we furthermore present proof rules that correspond to forward and backward simulations. Finally, we relate the forward and backward simulation techniques to subset-constructions that can be used in automata theory, e.g. for deciding language containment. | Refinement of State-Based Concurrent Systems The traces, failures, and divergences of CSP can be expressed as weakest precondition formulæ over action systems. We show how such systems may be refined up to failures-divergences, by giving two proof methods which are sound and jointly complete: forwards and backwards simulations.
The technical advantage of our weakest precondition approach over the usual relational approach is in our simple handling of divergence; the practical advantage is in the fact that the refinement calculus for sequential programs may be used to calculate forwards simulations. Our methods may be adapted to state-based development methods such as VDM or Z. | A Single Complete Rule for Data Refinement One module is said to be refined by a second if no program using the second module can detect that it is not using the first; in that case the second module can replace the first in any program. Data refinement transforms the interior pieces of a module — its state and consequentially its operations — in order to refine the module overall. | Reasoning about Action Systems using the B-Method The action system formalism has been succesfully used whenconstructing parallel and distributed systems in a stepwise mannerwithin the refinement calculus. Usually the derivation is carried outmanually. In order to be able to produce more trustworthy software,some mechanical tool is needed. In this paper we show how actionsystems can be derived and refined within the B-Toolkit, which is amechanical tool supporting a software development method, theB-Method. We describe how action systems are embedded in theB-Method. Furthermore, we show how a typical and nontrivialrefinement rule, the superposition refinement rule, is formalized andapplied on action systems within the B-Method. In addition toproviding tool support for action system refinement we also extendthe application area of the B-Method to cover parallel anddistributed systems. A derivation towards a distributed loadbalancing algorithm is given as a case study. | Fairness and hyperfairness in multi-party interactions In this paper, a new fairness notion is proposed for languages with multi-party interactions as the sole interprocess synchronization and communication primitive. The main advantage of this fairness notion is the elimination of starvation occurring solely due to race conditions (i.e., ordering of independent actions). Also, this is the first fairness notion for such languages which is fully-adequate with respect to the criteria presented in [AFK88]. The paper defines the notion, proves its properties, and presents examples of its usefulness. | Toward objective, systematic design-method comparisons Software design methodologies (SDMs) suggest ways to improve productivity and quality. They are collections of complementary design methods and rules for applying them. A base framework and modeling formalism to help designers compare SDMs and define what design issues different SDMs address, which of their components address similar design issues, and ways to integrate the best characteristics of each to make a cleaner, more comprehensive and flexible SDM are presented. The use of formalism and framework and the evaluation of objectivity and completeness using the type and function frameworks are described.<> | Real-time specification and modeling with joint actions The notion of joint actions provides a natural execution model for a specification language, when temporal logic of actions is used for formal reasoning. We extend this basis with scheduling, the role of which is to enforce liveness properties and to introduce real-time properties. This is done in a way that agrees with the partial-order view of computations and can be applied already in the early stages of specification and design. 
This leads to distinguishing between schedulings that are totally correct, partially correct, or incorrect with respect to liveness properties. A general scheduling policy of durational actions is formulated from which any reasonable scheduling can be obtained by reducing its nondeterminism. When this policy is totally correct for a system and gives the required real-time properties, no special limitations are imposed on the implementation. The approach also leads to a general classification of real-time models according to the permitted interactions between the computational state and real time. | A Look at Japan's Development of Software Engineering Technology First Page of the Article | Combining belief networks and neural networks for scene segmentation We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of labels. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of label images. Following the work of Bouman and Shapiro (1994), we consider the use of tree-structured belief networks (TSBNs) as prior models. The parameters in the TSBN are trained using a maximum-likelihood objective function with the EM algorithm and the resulting model is evaluated by calculating how efficiently it codes label images. A number of authors have used Gaussian mixture models to connect the label field to the image data. We compare this approach to the scaled-likelihood method of Smyth (1994) and Morgan and Bourlard (1995), where local predictions of pixel classification from neural networks are fused with the TSBN prior. Our results show a higher performance is obtained with the neural networks. We evaluate the classification results obtained and emphasize not only the maximum a posteriori segmentation, but also the uncertainty, as evidenced e.g., by the pixelwise posterior marginal entropies. We also investigate the use of conditional maximum-likelihood training for the TSBN and find that this gives rise to improved classification performance over the ML-trained TSBN | The algebra of multirelations. Multirelational semantics are well suited to reasoning about programs involving two kinds of non-determinism. This paper lays the categorical foundations for an algebraic calculus of multirelations. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.026934 | 0.018182 | 0.012121 | 0.009155 | 0.004093 | 0.001839 | 0.000885 | 0.000097 | 0.000019 | 0 | 0 | 0 | 0 | 0 |
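Several of the abstracts in this row appeal to forward (downward) simulation as the proof technique for refinement. As a reference point only, one common relational formulation of the per-operation step condition, assuming an abstraction relation R between abstract states a and concrete states c, is:

\[
\forall a, c, c'.\; R(a,c) \wedge C_{\mathit{op}}(c,c') \;\Rightarrow\; \exists a'.\; A_{\mathit{op}}(a,a') \wedge R(a',c'),
\]

together with an initialisation condition \( \forall c.\; C_{\mathit{init}}(c) \Rightarrow \exists a.\; A_{\mathit{init}}(a) \wedge R(a,c) \). Backward simulation is the dual condition; as the abstracts above note, the two methods are sound and jointly complete for proving refinement.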
Specification and Refinement of Access Control We consider the extension of fair event system specifications by concepts of access control (prohibitions, user rights, and obligations). We give proof rules for verifying that an access control policy is correctly implemented in a system, and consider preservation of access control by refinement of event systems. Prohibitions and obligations are expressed as properties of traces and are preserved by standard refinement notions of event systems. Preservation of user rights is not guaranteed by construction; we propose to combine implementation-level user rights and obligations to implement high-level user rights. | B#: toward a synthesis between Z and B In this paper, I present some ideas and principles underlying the realization of a new project called B#. This project follows the main ideas and principles already at work in B, but it also follows a number of older concepts developed in Z. In B#, the intent is to have a formal system to be used to model complex system in general, not only software systems. | Refinement of Fair Action Systems An action system is a framework for describing parallel or distributed systems, for which the refinement calculus offers a formalisation of the stepwise development method. Fairness is an important notion in modelling parallel or distributed systems, and this paper investigates a calculus for refinement of fair action systems. Simulations, which are proof techniques for refinement, are extended to verify fair action systems. Our work differs from others' in that the additional condition... | Introducing Dynamic Constraints in B In B, the expression of dynamic constraints is notoriously missing. In this paper, we make various proposals for introducing them.
They all express, in different complementary ways, how a system is allowed to evolve. Such descriptions are independent of
the proposed evolutions of the system, which are defined, as usual, by means of a number of operations. Some proof obligations
are thus proposed in order to reconcile the two points of view. We have been very careful to ensure that these proposals are
compatible with refinement. They are illustrated by several little examples, and a larger one. In a series of small appendices,
we also give some theoretical foundations to our approach. In writing this paper, we have been heavily influenced by the pioneering
works of Z. Manna and A. Pnueli [11], L. Lamport [10], R. Back [5] and M. Butler [6]. | Integrated Formal Methods, Third International Conference, IFM 2002, Turku, Finland, May 15-18, 2002, Proceedings | Proof rules and transformations dealing with fairness We provide proof rules enabling the treatment of two fairness assumptions in the context of Dijkstra's do-od-programs. These proof rules are derived by considering a transformed version of the original program which uses random assignments z ≔? and admits only fair computations. Various, increasingly complicated, examples are discussed. In all cases reasonably simple proofs can be given. The proof rules use well-founded structures corresponding to infinite ordinals and deal with the original programs and not their translated versions. | Unifying wp and wlp Boolean-valued predicates over a state space are isomorphic to its char- acteristic functions into {0,1}. Enlarging that range to { 1,0,1} allows the definition of extended predicates whose associated transformers gen- eralise the conventional wp and wlp. The correspondingly extended healthiness conditions include the new 'sub-additivity', an arithmetic inequality over predicates. Keywords: Formal semantics, program correctness, weakest precon- dition, weakest liberal precondition, Egli-Milner order. | Stepwise Removal of Virtual Channels in Distributed Algorithms A stepwise refinement method for the design of correct distributed algorithms is studied. The method frees the program designer from all the details of the target architecture of the system in early stages of the design process. The method is applied to a new aspect in the construction of distributed systems, the removal of virtual channels. We exemplify the design method by deriving a distributed algorithm. We show that the performed refinements preserve the correctness of the algorithm. | Higher Order Software A Methodology for Defining Software The key to software reliability is to design, develop, and manage software with a formalized methodology which can be used by computer scientists and applications engineers to describe and communicate interfaces between systems. These interfaces include: software to software; software to other systems; software to management; as well as discipline to discipline within the complete software development process. The formal methodology of Higher Order Software (HOS), specifically aimed toward large-scale multiprogrammed/multiprocessor systems, is dedicated to systems reliability. With six axioms as the basis, a given system and all of its interfaces is defined as if it were one complete and consistent computable system. Some of the derived theorems provide for: reconfiguration of real-time multiprogrammed processes, communication between functions, and prevention of data and timing conflicts. | The Draco Approach to Constructing Software from Reusable Components This paper discusses an approach called Draco to the construction of software systems from reusable software parts. In particular we are concerned with the reuse of analysis and design information in addition to programming language code. The goal of the work on Draco has been to increase the productivity of software specialists in the construction of similar systems. The particular approach we have taken is to organize reusable software components by problem area or domain. Statements of programs in these specialized domains are then optimized by source-to-source program transformations and refined into other domains. 
The problems of maintaining the representational consistency of the developing program and producing efficient practical programs are discussed. Some examples from a prototype system are also given. | Geometrically invariant watermarking using feature points This paper presents a new approach for watermarking of digital images providing robustness to geometrical distortions. The weaknesses of classical watermarking methods to geometrical distortions are outlined first. Geometrical distortions can be decomposed into two classes: global transformations such as rotations and translations and local transformations such as the StirMark attack. An overview of existing self-synchronizing schemes is then presented. Theses schemes can use periodical properties of the mark, invariant properties of transforms, template insertion, or information provided by the original image to counter geometrical distortions. Thereafter, a new class of watermarking schemes using the image content is presented. We propose an embedding and detection scheme where the mark is bound with a content descriptor defined by salient points. Three different types of feature points are studied and their robustness to geometrical transformations is evaluated to develop an enhanced detector. The embedding of the signature is done by extracting feature points of the image and performing a Delaunay tessellation on the set of points. The mark is embedded using a classical additive scheme inside each triangle of the tessellation. The detection is done using correlation properties on the different triangles. The performance of the presented scheme is evaluated after JPEG compression, geometrical attack and transformations. Results show that the fact that the scheme is robust to these different manipulations. Finally, in our concluding remarks, we analyze the different perspectives of such content-based watermarking scheme. | Sharp Retrenchment, Modulated Refinement and Simulation. Sharp retrenchment is introduced and briefly justified informally, as a liberalisation of refinement. In sharp retrenchment the relationship between an abstract operation and its concrete counterpart is mediated by extra predicates, allowing most particularly the description of non- refinement-like properties, and the mixing of I/O and state aspects in the passage between levels of abstraction. Sharp retrenchments are briefly contrasted with unsharp ones. Sharp retrenchments are shown to have a natural law of composition, and the way in which refinements may be viewed as sharp retrenchments is discussed. Modulated refinement is introduced as a version of refinement allowing mixing of I/O and state aspects, in order to facilitate comparison between sharp retrenchment and refinement, and various notions of simulation are considered in this context, specifically: stepwise simulation, the ability of simulator to mimic a sequence of execution steps of the simulatee; strong simulation, in which states and step labels are mapped independently between simulatee and simulator; and the refinement notion itself. Special cases of sharp retrenchment are shown to possess various subsets of these simulation properties, and the extent to which sharp retrenchments contain refinements within them is addressed. 
The details of the theory are worked out for the B-Method, though the applicability of the | A Survey on the Flexibility Requirements Related to Business Processes and Modeling Artifacts In competitive and evolving environments only organizations which can manage complexity and can respond to rapid change in an informed manner can gain a competitive advantage During the early 90's, workflow technologies offered a transversal integration capacity to the enterprise applications. Today, to "integrate" enterprise applications -and the activities they support- into business processes is not sufficient. The architecture of this integration should also be flexible. Enterprise requirements highlight flexible and adaptive processes whose execution can evolve (i) according to situations that cannot always be prescribed, and/or (ii) according to business changes (organizational, process improvement, strategic ...). More recent works highlight requirements in term of flexible and adaptive workflows, whose execution can evolve according to situations that cannot always be prescribed. This paper presents the state of the art for flexible business process management systems and criteria for comparing them. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2496 | 0.04992 | 0.022691 | 0.006563 | 0.000973 | 0.000101 | 0.000017 | 0.000002 | 0 | 0 | 0 | 0 | 0 | 0 |
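The access-control row above hinges on the observation that prohibitions and obligations, being trace properties, survive refinement. A minimal sketch of that preservation argument, reading refinement as trace inclusion and a prohibition as a set P of allowed traces:

\[
\mathit{traces}(C) \subseteq \mathit{traces}(A) \;\wedge\; \mathit{traces}(A) \subseteq P \;\Rightarrow\; \mathit{traces}(C) \subseteq P .
\]

User rights, by contrast, assert that certain behaviours remain possible, which trace inclusion alone does not guarantee; hence the combination of implementation-level rights and obligations proposed in that abstract.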
Stability analysis of uncertain sampled-data systems with incremental delay using looped-functionals The robust stability analysis of asynchronous and uncertain sampled-data systems with constant incremental input delay is addressed in the looped-functional framework. These functionals have been shown to be suitable for the analysis of impulsive systems as they allow one to express discrete-time stability conditions in an affine way, enabling then the consideration of uncertain and time-varying systems. The stability conditions are obtained by first reformulating the sampled-data system as an impulsive system, and by then considering a tailored looped-functional along with Wirtinger's inequality, a recently introduced inequality that has been shown to be less conservative than Jensen's inequality. Several examples are given for illustration. | Stabilization by using artificial delays: An LMI approach. Static output-feedback stabilization for the nth order vector differential equations by using artificial multiple delays is considered. Under assumption of the stabilizability of the system by a static feedback that depends on the output and its derivatives up to the order n−1, a delayed static output-feedback is found that stabilizes the system. The conditions for the stability analysis of the resulting closed-loop system are given in terms of simple LMIs. It is shown that the LMIs are always feasible for appropriately chosen gains and small enough delays. Robust stability analysis in the presence of uncertain time-varying delays and stochastic perturbation of the system coefficients is provided. Numerical examples including chains of three and four integrators that are stabilized by static output-feedbacks with multiple delays illustrate the efficiency of the method. | An IQC Approach to Robust Stability of Aperiodic Sampled-Data Systems. Conditions for robust stability of sampled-data systems with non-uniform sampling patterns and structural uncertainties are derived. The problem is tackled under the integral quadratic constraint (IQC) framework, where the aperiodic sampling operation is modelled by an delay-integration operator. Characterization based on integral quadratic constrains (IQC) is identified for this operator and the IQC theory is applied to derive convex stability criteria. Compared to the dominating Lyapunov approach where the candidate Lyapunov-Krasovskii functionals or looped functionals need to be tailored for the systems under consideration and therefore the stability conditions need to be re-derived whenever additional uncertainties are considered, the proposed approach has the advantage of avoiding such endeavor. Numerical examples are given to illustrate this main point and effectiveness of the proposed approach. | Exponential synchronization of a class of neural networks with sampled-data control.
This paper investigates the problem of the master-slave synchronization for a class of neural networks with discrete and distributed delays under sampled-data control. By introducing some new terms, a novel piecewise time-dependent Lyapunov-Krasovskii functional (LKF) is constructed to fully capture the available characteristics of real sampling information and nonlinear function vector of the system. Based on the LKF and Wirtinger-based inequality, less conservative synchronization criteria are obtained to guarantee the exponential stability of the error system, and then the slave system is synchronized with the master system. The designed sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend on the maximum sampling period and the decay rate. The criteria are less conservative than the ones obtained in the existing works. A numerical example is presented to illustrate the effectiveness and merits of the proposed method. | New stability conditions for systems with distributed delays In the present paper, sufficient conditions for the exponential stability of linear systems with infinite distributed delays are presented. Such systems arise in population dynamics, in traffic flow models, in networked control systems, in PID controller design and in other engineering problems. In the early Lyapunov-based analysis of systems with distributed delays (Kolmanovskii & Myshkis, 1999), the delayed terms were treated as perturbations, where it was assumed that the system without the delayed term is asymptotically stable. Later, for the case of constant kernels and finite delays, less conservative conditions were derived under the assumption that the corresponding system with the zero-delay is stable (Chen & Zheng, 2007). We will generalize these results to the infinite delay case by extending the corresponding Jensen's integral inequalities and Lyapunov-Krasovskii constructions. Our main challenge is the stability conditions for systems with gamma-distributed delays, where the delay is stabilizing, i.e. the corresponding system with the zero-delay as well as the system without the delayed term are not asymptotically stable. Here the results are derived by using augmented Lyapunov functionals. Polytopic uncertainties in the system matrices can be easily included in the analysis. Numerical examples illustrate the efficiency of the method. Thus, for the traffic flow model on the ring, where the delay is stabilizing, the resulting stability region is close to the theoretical one found in Michiels, Morarescu, and Niculescu (2009) via the frequency domain analysis. | Wirtinger's inequality and Lyapunov-based sampled-data stabilization. Discontinuous Lyapunov functionals appeared to be very efficient for sampled-data systems (Fridman, 2010, Naghshtabrizi et al., 2008). In the present paper, new discontinuous Lyapunov functionals are introduced for sampled-data control in the presence of a constant input delay. The construction of these functionals is based on the vector extension of Wirtinger’s inequality. These functionals lead to simplified and efficient stability conditions in terms of Linear Matrix Inequalities (LMIs). The new stability analysis is applied to sampled-data state-feedback stabilization and to a novel sampled-data static output-feedback problem, where the delayed measurements are used for stabilization. | Synchronization of Lur'e systems via stochastic reliable sampled-data controller. 
This paper deals with the problem of synchronization methods for Lur'e systems with a stochastic reliable sampled-data control scheme. By constructing the framework of linear matrix inequalities (LMIs) based on the Lyapunov method and utilizing the Wirtinger-based integral inequality (WBI), Jensen's inequality (JI), and other lemmas, synchronization criteria of sampled-data control for Lur'e systems are derived and compared with the existing works. The necessity and validity of the proposed results are illustrated by two numerical examples. | Relaxed conditions for stability of time-varying delay systems. In this paper, the problem of delay-dependent stability analysis of time-varying delay systems is investigated. Firstly, a new inequality which is the modified version of free-matrix-based integral inequality is derived, and then by aid of this new inequality, two novel lemmas which are relaxed conditions for some matrices in a Lyapunov function are proposed. Based on the lemmas, improved delay-dependent stability criteria which guarantee the asymptotic stability of the system are presented in the form of linear matrix inequality (LMI). Two numerical examples are given to demonstrate the reduced conservatism of the proposed methods. | Multiple integral inequalities and stability analysis of time delay systems. This paper is devoted to stability analysis of continuous-time delay systems based on a set of Lyapunov–Krasovskii functionals. New multiple integral inequalities are derived that involve the famous Jensen’s and Wirtinger’s inequalities, as well as the recently presented Bessel–Legendre inequalities of Seuret and Gouaisbaut (2015) and the Wirtinger-based multiple-integral inequalities of Park et al. (2015) and Lee et al. (2015). The present paper aims at showing that the proposed set of sufficient stability conditions can be arranged into a bidirectional hierarchy of LMIs establishing a rigorous theoretical basis for comparison of conservatism of the investigated methods. Numerical examples illustrate the efficiency of the method. | Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies. | Synthetic texturing using digital filters | Applying the SCR requirements method to a weapons control panel: an experience report | A regional decomposition method for recognizing handprinted characters A regional decomposition method is proposed to facilitate pattern analysis and recognition. It splits a complicated pattern into several simple parts or sub-patterns, so that the pattern can be identified by examining the distinct parts. A complexity analysis is derived in this paper to prove the effectiveness of the regional decomposition method; mathematical and statistical formulas are also provided to evaluate the recognition rates of different parts.
For a sample of 36 alphanumeric characters handprinted in the 89 most common styles, the total mean recognition rates of parts have been found to be 30% higher than those obtained from subjective experiments. | Matching with Externalities | 1.042689 | 0.04 | 0.027544 | 0.010022 | 0.005743 | 0.003197 | 0.000766 | 0.000151 | 0.000038 | 0 | 0 | 0 | 0 | 0 |
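Several abstracts in this row compare Jensen-type and Wirtinger-based bounds. As a reference point, a commonly cited form of these integral inequalities (following Seuret and Gouaisbaut), for a matrix R ≻ 0 and a differentiable function ω on [a, b], is:

\[
\int_a^b \dot{\omega}^{T}(u)\, R\, \dot{\omega}(u)\, \mathrm{d}u \;\ge\; \tfrac{1}{b-a}\,\Omega_0^{T} R\, \Omega_0 \quad \text{(Jensen)},
\]
\[
\int_a^b \dot{\omega}^{T}(u)\, R\, \dot{\omega}(u)\, \mathrm{d}u \;\ge\; \tfrac{1}{b-a}\,\Omega_0^{T} R\, \Omega_0 + \tfrac{3}{b-a}\,\Omega_1^{T} R\, \Omega_1 \quad \text{(Wirtinger-based)},
\]

with \( \Omega_0 = \omega(b)-\omega(a) \) and \( \Omega_1 = \omega(b)+\omega(a) - \tfrac{2}{b-a}\int_a^b \omega(u)\,\mathrm{d}u \). The extra nonnegative term is what makes the Wirtinger-based bound less conservative than Jensen's inequality, as the abstracts above state.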
Efficient residual data coding in CABAC for HEVC lossless video compression. After the development of the next generation video coding standard, referred to as high efficiency video coding (HEVC), the joint collaborative team of the ITU-T video coding experts group and the ISO/IEC moving picture experts group has now also standardized a lossless extension of such a standard. HEVC was originally designed for lossy video compression, thus, not ideal for lossless video compression. In this paper, we propose an efficient residual data coding method for HEVC lossless video compression. Based on the fact that there are statistical differences of residual data between lossy and lossless coding, we improved the HEVC lossless coding using sample-based angular prediction (SAP), modified level binarization, and binarization table selection with the weighted sum of previously encoded level values. Experimental results show that the proposed method provides high compression ratio up to 11.32 and reduces decoding complexity. | Improved lossless intra coding for H.264/MPEG-4 AVC. A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project. | Edge-directed prediction for lossless compression of natural images This paper sheds light on the least-square (LS)-based adaptive prediction schemes for lossless compression of natural images. Our analysis shows that the superiority of the LS-based adaptation is due to its edge-directed property, which enables the predictor to adapt reasonably well from smooth regions to edge areas. Recognizing that LS-based adaptation improves the prediction mainly around the edge areas, we propose a novel approach to reduce its computational complexity with negligible performance sacrifice. The lossless image coder built upon the new prediction scheme has achieved noticeably better performance than the state-of-the-art coder CALIC with moderately increased computational complexity | The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. 
By combining simplicity with the compression potential of context models, the algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | A semantics of multiple inheritance this paper is to present a clean semantics of multiple inheritance and to show that, in the context of strongly-typed, statically-scoped languages, a sound typechecking algorithm exists. Multiple inheritance is also interpreted in a broad sense: instead of being limited to objects, it is extended in a natural way to union types and to higher-order functional types. This constitutes a semantic basis for the unification of functional and object-oriented programming. | The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism. | A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section. | A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation.
We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds. | A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general. | Inheritance of proofs The Curry-Howard isomorphism, a fundamental property shared by many type theories, establishes a direct correspondence between programs and proofs. This suggests that the same structuring principles that ease programming should be useful for proving as well. To exploit object-oriented structuring mechanisms for verification, we extend the object-model of Pierce and Turner, based on the higher-order typed λ-calculus F≤ω, with a logical component. By enriching the (functional) signature of objects with a specification, methods and their correctness proofs are packed together in objects. The uniform treatment of methods and proofs gives rise in a natural way to object-oriented proving principles - including inheritance of proofs, late binding of proofs, and encapsulation of proofs - as analogues to object-oriented programming principles. We have used Lego, a type-theoretic proof checker, to explore the feasibility of this approach. (C) 1998 John Wiley & Sons, Inc. | Software engineering for parallel systems Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered. | Power Aware System Refinement We propose a formal, power aware refinement of systems.
The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the timewise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions on both functional and temporal properties, and furthermore on power-related issues, are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1.2 | 0.01 | 0.003226 | 0.000329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
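The HEVC-lossless and LOCO-I abstracts in this row both rely on sample-wise spatial prediction followed by residual coding. As an illustrative sketch only (plain Python, with hypothetical variable names; not code from any of the cited papers), the fixed median-edge-detector predictor used in LOCO-I/JPEG-LS predicts each sample from its left (a), upper (b) and upper-left (c) causal neighbours:

def med_predict(a, b, c):
    # LOCO-I / JPEG-LS fixed predictor ("median edge detector").
    # Equivalent to median(a, b, a + b - c): the planar estimate
    # a + b - c is clamped to the range [min(a, b), max(a, b)],
    # which tends to pick the neighbour on the correct side of an edge.
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

# Residual for one sample whose causal neighbours are a=100, b=90, c=95:
sample = 97
residual = sample - med_predict(100, 90, 95)  # prediction is 100 + 90 - 95 = 95
print(residual)  # 2

In JPEG-LS the residuals produced this way are then context-modelled and coded with adaptively chosen Golomb-type codes, as the LOCO-I abstract above describes; the sample-based angular prediction in the HEVC query plays an analogous role for lossless intra coding.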
A Refinement Calculus for Shared-Variable Parallel and Distributed Programming Parallel computers have not yet had the expected impact on mainstream computing. Parallelism adds a level of complexity to the programming task that makes it very error-prone. Moreover, a large variety of very different parallel architectures exists. Porting an implementation from one machine to another may require substantial changes. This paper addresses some of these problems by developing a formal basis for the design of parallel programs in the form of a refinement calculus. The calculus allows the stepwise formal derivation of an abstract, low-level implementation from a trusted, high-level specification. The calculus thus helps structuring and documenting the development process. Portability is increased, because the introduction of a machine-dependent feature can be located in the refinement tree. Development efforts above this point in the tree are independent of that feature and are thus reusable. Moreover, the discovery of new, possibly more efficient solutions is facilitated. Last but not least, programs are correct by construction, which obviates the need for difficult debugging. Our programming/specification notation supports fair parallelism, shared-variable and message-passing concurrency, local variables and channels. The calculus rests on a compositional trace semantics that treats shared-variable and message-passing concurrency uniformly. The refinement relation combines a context-sensitive notion of trace inclusion and assumption-commitment reasoning to achieve compositionality. The calculus straddles both concurrency paradigms, that is, a shared-variable program can be refined into a distributed, message-passing program and vice versa. | Invariants, Well-Founded Statements and Real-Time Program Algebra. Program algebras based on Kleene algebra abstract the essential properties of programming languages in the form of algebraic laws. The proof of a refinement law may be expressed in terms of the algebraic properties of programs required for the law to hold, rather than directly in terms of the semantics of a language. This has the advantage that the law is then valid for any programming language that satisfies the axioms of the algebra. In this paper we explore the notion of well-founded statements and their relationship to well-founded relations and iterations. The laws about well-founded statements and relations are combined with invariants to derive a simpler proof of a while-loop introduction law. The algebra is then applied to a real-time programming language. The main difference is that tests within conditions and loops take time to evaluate and during that time the values of program inputs may change. This requires new definitions for conditionals and while loops but the proofs of the introduction laws for these constructs can still make use of the more basic algebraic properties of iterations. | Balancing expressiveness in formal approaches to concurrency One might think that specifying and reasoning about concurrent programs would be easier with more expressive languages. This paper questions that view. Clearly too weak a notation can mean that useful properties either cannot be expressed or their expression is unnatural. But choosing too powerful a notation also has its drawbacks since reasoning receives little guidance. For example, few would suggest that programming languages themselves provide tractable specifications.
Both rely/guarantee methods and separation logic(s) provide useful frameworks in which it is natural to reason about aspects of concurrency. Rather than pursue an approach of extending the notations of either approach, this paper starts with the issues that appear to be inescapable with concurrency and--only as a response thereto--examines ways in which these fundamental challenges can be met. Abstraction is always a ubiquitous tool and its influence on how the key issues are tackled is examined in each case. | Generalised rely-guarantee concurrency: An algebraic foundation. The rely-guarantee technique allows one to reason compositionally about concurrent programs. To handle interference the technique makes use of rely and guarantee conditions, both of which are binary relations on states. A rely condition is an assumption that the environment performs only atomic steps satisfying the rely relation and a guarantee is a commitment that every atomic step the program makes satisfies the guarantee relation. In order to investigate rely-guarantee reasoning more generally, in this paper we allow interference to be represented by a process rather than a relation and hence derive more general rely-guarantee laws. The paper makes use of a weak conjunction operator between processes, which generalises a guarantee relation to a guarantee process, and introduces a rely quotient operator, which generalises a rely relation to a process. The paper focuses on the algebraic properties of the general rely-guarantee theory. The Jones-style rely-guarantee theory can be interpreted as a model of the general algebraic theory and hence the general laws presented here hold for that theory. | A Method for Refining Atomicity in Parallel Algorithms Parallel programs are described as action systems. These are basically nondeterministic do-od programs that can be executed in both a sequential and a parallel fashion. A method for refining the atomicity of actions in a parallel program is described. This allows derivation of parallel programs by stepwise refinement, starting from an initial highl level and sequential program and ending in a parallel program for shared memory or message passing architectures. A calculus of refinements is used as a framework for the derivation method. The notion of correctness being preserved by the refinements is total correctness. The method is especially suited for derivation of parallel algorithms for MIMD-type multiprocessor systems. | A theoretical basis for stepwise refinement and the programming calculus A uniform treatment of specifications, programs, and programming is presented. The treatment is based on adding a specification statement to a given procedural language and defining its semantics. The extended language is thus a specification language and programs are viewed as a subclass of specifications. A partial ordering on specifications/programs corresponding to ‘more defined’ is defined. In this partial ordering the program/specification hybrids that arise in the construction of a program by stepwise refinement form a monotonic sequence. We show how Dijkstra's calculus for the derivation of programs corresponds to constructing this monotonic sequence. Formalizing the calculus thus gives some insight into the intellectual activity it demands and allows us to hint at further developments. | Data refinement by calculation Data refinement is the systematic substitution of one data type for another in a program. 
Usually, the new data type is more efficient than the old, but possibly more complex; the purpose of the data refinement in that case is to make progress in program construction from more abstract to more concrete formulations. A recent trend in program construction is to calculate programs from their specifications; that contrasts with proving that a given program satisfies some specification. We investigate to what extent the trend can be applied to data refinement. | Distributed snapshots: determining global states of distributed systems This paper presents an algorithm by which a process in a distributed system determines a global state of the system during a computation. Many problems in distributed systems can be cast in terms of the problem of detecting global states. For instance, the global state detection algorithm helps to solve an important class of problems: stable property detection. A stable property is one that persists: once a stable property becomes true it remains true thereafter. Examples of stable properties are “computation has terminated,” “ the system is deadlocked” and “all tokens in a token ring have disappeared.” The stable property detection problem is that of devising algorithms to detect a given stable property. Global state detection can also be used for checkpointing. | The Manchester prototype dataflow computer The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism. | TS/Scheme: distributed data structures in Lisp Without Abstract | Language Constructs for Data Partitioning and Distribution This article presents a survey of language features for distributed memory multiprocessor systems (DMMs), in particular, systems that provide features for data partitioning and distribution. In these systems the programmer is freed from consideration of the low-level details of the target architecture in that there is no need to program explicit processes or specify interprocess communication. Programs are written according to the shared memory programming paradigm but the programmer is required to specify, by means of directives, additional syntax or interactive methods, how the data of the program are decomposed and distributed. | Reasoning with Background Knowledge - A Three-Level Theory | LANSF: a protocol modelling environment and its implementation SUMMARY LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. 
They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1.045339 | 0.047267 | 0.036022 | 0.016644 | 0.002363 | 0.00046 | 0.000029 | 0.000003 | 0 | 0 | 0 | 0 | 0 | 0 |
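The refinement-calculus abstracts in this row all build on the same underlying ordering. As a reference point only, the standard refinement ordering on programs viewed as predicate transformers (in the weakest-precondition sense used above) is:

\[
S \sqsubseteq T \;\iff\; \forall q.\; \mathrm{wp}(S, q) \Rightarrow \mathrm{wp}(T, q),
\]

so a stepwise derivation is a chain \( \mathit{Spec} \sqsubseteq S_1 \sqsubseteq \dots \sqsubseteq \mathit{Code} \) whose overall correctness follows from transitivity; the rely/guarantee and trace-semantics abstracts above extend this picture to account for interference between parallel components.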
Designing intelligent agents to support universal accessibility of E-commerce services We propose the design of an intelligent agent to improve accessibility of E-commerce applications and Web sites for visually impaired individuals. An important feature of this design is the application of knowledge representation technology—specifically conceptual structures—to explicitly capture and represent the navigational semantics of each document. This allows one to view the process of navigating a complex HTML document—e.g., a document containing tables and complex forms—as solving goals with respect to an action theory, i.e., as a planning problem. This information will be used by agents to provide a level of intelligence in the navigation process, allow users to express high level navigation goals, solve such goals using planning techniques. Effectively, the navigation of the complex components of HTML documents will be done by agents, on behalf of the visually impaired user. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. 
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them.
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Stability of Discrete-Time Systems with Time-Varying Delays via a Novel Summation Inequality This paper is concerned with the stability analysis of discrete linear systems with time-varying delays. The novelty of the paper comes from the consideration of a new inequality which is less conservative than the celebrated Jensen inequality employed in the context of discrete-time delay systems. This inequality is a discrete-time counterpart of the Wirtinger-based integral inequality that was recently employed for the improved analysis of continuous-time systems with delays. However, differently from the continuous-time case, the proof of the new inequality is not based on the Wirtinger inequality. The method is also combined with an efficient representation of the improved reciprocally convex combination inequality in order to reduce the conservatism induced by the LMIs optimization setup. The effectiveness of the proposed result is illustrated by some classical examples from the literature. | Two novel general summation inequalities to discrete-time systems with time-varying delay. This paper presents two novel general summation inequalities, respectively, in the upper and lower discrete regions. Thanks to the orthogonal polynomials defined in different inner spaces, various concrete single/multiple summation inequalities are obtained from the two general summation inequalities, which include almost all of the existing summation inequalities, e.g., the Jensen, the Wirtinger-based and the auxiliary function-based summation inequalities. Based on the new summation inequalities, a less conservative stability condition is derived for discrete-time systems with time-varying delay. Numerical examples are given to show the effectiveness of the proposed approach. | Stability analysis of continuous-time systems with time-varying delay using new Lyapunov-Krasovskii functionals. This paper studies the stability of linear continuous-time systems with time-varying delay by employing new Lyapunov–Krasovskii functionals. Based on the new Lyapunov–Krasovskii functionals, more relaxed stability criteria are obtained. Firstly, in order to coordinate with the use of the third-order Bessel-Legendre inequality, a proper quadratic functional is constructed. Secondly, two couples of integral terms {∫_{t−h_t}^{s} x(s)ds, ∫_{s}^{t} x(s)ds} and {∫_{t−h_M}^{s} x(s)ds, ∫_{s}^{t−h_t} x(s)ds} are involved in the integral functionals ∫_{t−h_t}^{t}(·)ds and ∫_{t−h_M}^{t−h_t}(·)ds, respectively, so that the coupling information between them can be fully utilized. Finally, two commonly-used numerical examples are given to demonstrate the effectiveness of the proposed method. | Discrete inequalities based on multiple auxiliary functions and their applications to stability analysis of time-delay systems This paper presents new discrete inequalities for single summation and double summation. These inequalities are based on multiple auxiliary functions and include the Jensen discrete inequality and the discrete Wirtinger-based inequality as special cases. An application of these discrete inequalities to analyze stability of linear discrete systems with an interval time-varying delay is studied and a less conservative stability condition is obtained. Three numerical examples are given to show the effectiveness of the obtained stability condition. | Abel lemma-based finite-sum inequality and its application to stability analysis for linear discrete time-delay systems This paper is concerned with stability of linear discrete time-delay systems.
Note that a tighter estimation on a finite-sum term appearing in the forward difference of some Lyapunov functional leads to a less conservative delay-dependent stability criterion. By using Abel lemma, a novel finite-sum inequality is established, which can provide a tighter estimation than the ones in the literature for the finite-sum term. Applying this Abel lemma-based finite-sum inequality, a stability criterion for linear discrete time-delay systems is derived. It is shown through numerical examples that the stability criterion can provide a larger admissible maximum upper bound than stability criteria using a Jensen-type inequality approach and a free-weighting matrix approach. | A survey of linear matrix inequality techniques in stability analysis of delay systems Recent years have witnessed a resurgence of research interests in analysing the stability of time-delay systems. Many results have been reported using a variety of approaches and techniques. However, much of the focus has been laid on the use of the Lyapunov-Krasovskii theory to derive sufficient stability conditions in the form of linear matrix inequalities. The purpose of this article is to survey the recent results developed to analyse the asymptotic stability of time-delay systems. Both delay-independent and delay-dependent results are reported in the article. Special emphases are given to the issues of conservatism of the results and computational complexity. Connections of certain delay-dependent stability results are also discussed. | Stability of linear systems with general sawtooth delay It is well known that in many particular systems, the upper bound on a certain time-varying delay that preserves the stability may be higher than the corresponding bound for the constant delay. Moreover, sometimes oscillating delays improve the performance (Michiels, W., Van Assche, V. & Niculescu, S. (2005) Stabilization of time-delay systems with a controlled time-varying delays and applications... | Static output feedback control of nonhomogeneous Markovian jump systems with asynchronous time delays. This paper investigates the problem of static output feedback control for a class of nonhomogeneous Markovian jump system (NMJS) with asynchronous time delays (ATDs). Since the ATDs subject to uncertain transition probabilities (TPs) are taking into account in a practical phenomenon, new approaches are introduced to deal with the ATDs characterized by nonhomogeneous Markov processes. It is assumed that the communication links are not perfect due to its detrimental effect on the performance of systems. Stochastic variables are presented to characterize the data transmission, which are depending on operation modes and satisfying the Bernoulli distribution. Sets of slack variables are adopted to decouple the product terms between system matrices and Lyapunov matrices. Based on an extended Lyapunov function combined with Finsler inequality approach, the robust static output-feedback controller is designed for the closed-loop NMJS. Finally, a numerical example is provided to verify the design method. | An improved delay-partitioning approach to stability criteria for generalized neural networks with interval time-varying delays. This paper deals with the problem of stability analysis for generalized delayed neural networks with interval time-varying delays based on the delay-partitioning approach. 
By constructing a suitable Lyapunov–Krasovskii functional with triple- and four-integral terms and using Jensen’s inequality, the Wirtinger-based single- and double-integral inequality technique and linear matrix inequalities (LMIs), a stability criterion is derived which guarantees asymptotic stability of the addressed neural networks. This LMI can be easily solved via a convex optimization algorithm. The novelty of this paper is that the consideration of new integral inequalities and a Lyapunov–Krasovskii functional is shown to be less conservative, and it takes full account of the relationship between the terms in the Leibniz–Newton formula within the framework of LMIs. Moreover, it is assumed that the lower bound of the time-varying delay is not restricted to be zero. Finally, several interesting numerical examples are given to demonstrate the effectiveness and less conservativeness of our theoretical results over well-known examples existing in recent literature. | Stochastic image warping for improved watermark desynchronization The use of digital watermarking in real applications is impeded by the weakness of current available algorithms against signal processing manipulations leading to the desynchronization of the watermark embedder and detector. For this reason, the problem of watermarking under geometric attacks has received considerable attention throughout recent years. Despite their importance, only few classes of geometric attacks are considered in the literature, most of which consist of global geometric attacks. The random bending attack contained in the Stirmark benchmark software is the most popular example of a local geometric transformation. In this paper, we introduce two new classes of local desynchronization attacks (DAs). The effectiveness of the new classes of DAs is evaluated from different perspectives including perceptual intrusiveness and desynchronization efficacy. This can be seen as an initial effort towards the characterization of the whole class of perceptually admissible DAs, a necessary step for the theoretical analysis of the ultimate performance reachable in the presence of watermark desynchronization and for the development of a new class of watermarking algorithms that can efficiently cope with them. | Specifying dynamic support for collaborative work within WORLDS In this paper, we present a specification language developed for WORLDS, a next generation computer-supported collaborative work system. Our specification language, called Introspect, employs a meta-level architecture to allow run-time modifications to specifications. We believe such an architecture is essential to WORLDS' ability to provide dynamic support for collaborative work in an elegant fashion. | Formal validation of viewpoint specifications How can we be sure that a set of viewpoints is valid, in the sense that it is possible to build a system consistent with each and every one of them? Our approach is based on the idea of amalgamating the individual viewpoints into a single coherent whole. A formal study of this process leads to a proposed approach for combining viewpoints that identifies conditions under which the resulting specification reflects all the properties of the constituent viewpoints. These ideas are applied to the development of Z specifications, and it is shown how they might be used in other contexts | Toward objective, systematic design-method comparisons Software design methodologies (SDMs) suggest ways to improve productivity and quality.
They are collections of complementary design methods and rules for applying them. A base framework and modeling formalism to help designers compare SDMs and define what design issues different SDMs address, which of their components address similar design issues, and ways to integrate the best characteristics of each to make a cleaner, more comprehensive and flexible SDM are presented. The use of formalism and framework and the evaluation of objectivity and completeness using the type and function frameworks are described. | Context based lossless coder based on RLS predictor adaption scheme In the paper, a highly efficient context image lossless coder of moderate complexity is presented. Three main plus a few auxiliary contexts are described. Predictors are adaptive; an enhanced RLS coefficient update formula is implemented. A stage of NLMS prediction is added. Prediction error bias is removed using a robust multi-source approach. An advanced adaptive context arithmetic coder is applied. Experimental results show that indeed, the new coder is both more effective and faster than other state-of-the-art algorithms. | 1.008387 | 0.007692 | 0.007692 | 0.006853 | 0.002962 | 0.001559 | 0.000834 | 0.000256 | 0.000049 | 0 | 0 | 0 | 0 | 0
A computer-aided prototyping system A description is given of an approach to rapid prototyping that uses a specification language (the Prototype-System Description Language, PSDL) integrated with a set of software tools, including an execution support system, a rewrite system, a syntax-directed editor with graphics capabilities, a software base, a design database, and a design-management system. The prototyping language lets the designer use dataflow diagrams with nonprocedural control constraints as part of the specification of a hierarchically structured prototype. The resulting description is free from programming-level details, in contrast to prototypes constructed with a programming language. The discussion covers the language and method, rewrite subsystem, design manager, software base, and execution support. | Indexing hypertext documents in context | An Engineering Approach to Hard Real-Time System Design This paper presents a systematic methodology for the design of distributed fault tolerant real-time systems. The methodology covers the stepwise refinement of the given requirements, expressed in the form of real-time transactions, to task and protocol executions. It also includes a timing analysis and dependability evaluation of the still incomplete design. The testability of the evolving system is considered to be of essential concern. A set of coherent tools for the support of the methodology is described in some detail. The methodology assumes that the run-time architecture is based on static scheduling and a globally synchronised time-base is available to co-ordinate the system actions in the domain of real-time. | Reusing analogous components Using formal specifications to represent software components facilitates the determination of reusability because they more precisely characterize the functionality of the software, and the well-defined syntax makes processing amenable to automation. This paper presents an approach, based on formal methods, to the search, retrieval, and modification of reusable software components. From a two-tiered hierarchy of reusable software components, the existing components that are analogous to the query specification are retrieved from the hierarchy. The specification for an analogous retrieved component is compared to the query specification to determine what changes need to be applied to the corresponding program component in order to make it satisfy the query specification. | A prototyping language for real-time software PSDL is a language for describing prototypes of real-time software systems. It is most useful for requirements analysis, feasibility studies, and the design of large embedded systems. PSDL has facilities for recording and enforcing timing constraints, and for modeling the control aspects of real-time systems using nonprocedural control constraints, operator abstractions, and data abstractions. The language has been designed for use with an associated prototyping methodology. PSDL prototypes are executable if supported by a software base containing reusable software components in an underlying programming language (e.g. Ada). | Incremental planning using conceptual graphs | Specifications are (preferably) executable The validation of software specifications with respect to explicit and implicit user requirements is extremely difficult. To ease the validation task and to give users immediate feedback of the behavior of the future software it was suggested to make specifications executable.
However, Hayes and Jones (Hayes, Jones 89) argue that executable specifications should be avoided because executability can restrict the expressiveness of specification languages, and can adversely affect implementations. In this paper I will argue for executable specifications by showing that non-executable formal specifications can be made executable on almost the same level of abstraction and without essentially changing their structure. No new algorithms have to be introduced to get executability. In many cases the combination of property-orientation and search results in specifications based on the generate-and-test approach. Furthermore, I will demonstrate that declarative specification languages allow one to combine high expressiveness and executability. | Contexts, Canons and Coreferent Types A major area of development in the field of knowledge representation is the idea of contexts. In a large system a reasoner can't re-examine everything known all the time. To deal with the sheer size and complexity of large knowledge bases like CYC, a reasoner must be able to limit its search to some context relevant to the immediate problem at hand. | Supporting conflict resolution in cooperative design systems Complex modern-day artifacts are designed cooperatively by groups of experts, each with their own areas of expertise. The interaction of such experts inevitably involves conflict. This paper presents an implemented computational model, based on studies of human cooperative design, for supporting the resolution of such conflicts. This model is based centrally on the insights that general conflict resolution expertise exists separately from domain-level design expertise, and that this expertise can be instantiated in the context of particular conflicts into specific advice for resolving those conflicts. Conflict resolution expertise consists of a taxonomy of design conflict classes in addition to associated general advice suitable for resolving conflicts in these classes. The abstract nature of conflict resolution expertise makes it applicable to a wide variety of design domains. This paper describes this conflict resolution model and provides examples of its operation from an implemented cooperative design system for local area network design that uses machine-based design agents. How this model is being extended to support and learn from collaboration of human design agents is also discussed. | Inquiry-Based Requirements Analysis This approach emphasizes pinpointing where and when information needs occur; at its core is the inquiry cycle model, a structure for describing and supporting discussions about system requirements. The authors use a case study to describe the model's conversation metaphor, which follows analysis activities from requirements elicitation and documentation through refinement. | Patterns of large software systems: failure and success Software management consultants have something in common with physicians: both are much more likely to be called in when there are serious problems rather than when everything is fine. Examining large software systems-those in excess of 5000 function points (which is roughly 500000 source code statements in a procedural programming language such as Cobol or Fortran)-that are in trouble is very common for management consultants. Unfortunately, the systems are usually already late, over budget, and showing other signs of acute distress before the study begins.
The consultant engagements, therefore, serve to correct the problems and salvage the system-if, indeed, salvaging is possible. The failure or cancellation rate of large software systems is over 20 percent. Of those that are completed, about two thirds experience schedule delays and cost overruns that may approach 100 percent. Roughly the same number are plagued by low reliability and quality problems in the first year of deployment. Yet some large systems finish early, meet their budgets, and have few, if any, quality problems. How do these projects succeed, when so many fail? | Usability analysis with Markov models How hard do users find interactive devices to use to achieve their goals, and how can we get this information early enough to influence design? We show that Markov modeling can obtain suitable measures, and we provide formulas that can be used for a large class of systems. We analyze and consider alternative designs for various real examples. We introduce a “knowledge/usability graph,” which shows the impact of even a smaller amount of knowledge for the user, and the extent to which designers' knowledge may bias their views of usability. Markov models can be built into design tools, and can therefore be made very convenient for designers to utilize. One would hope that in the future, design tools would include such mathematical analysis, and no new design skills would be required to evaluate devices. A particular concern of this paper is to make the approach accessible. Complete program code and all the underlying mathematics are provided in appendices to enable others to replicate and test all results shown. | Quality prediction and assessment for product lines In recent years, software product lines have emerged as a promising approach to improve software development productivity in IT industry. In the product line approach, we identify both commonalities and variabilities in a domain, and build generic assets for an organization. Feature diagrams are often used to model common and variant product line requirements and can be considered part of the organizational assets. Despite their importance, quality attributes (or non-functional requirements, NFRs) such as performance and security have not been sufficiently addressed in product line development. A feature diagram alone does not tell us how to select a configuration of variants to achieve desired quality attributes of a product line member. There is a lack of an explicit model that can represent the impact of variants on quality attributes. In this paper, we propose a Bayesian Belief Network (BBN) based approach to quality prediction and assessment for a software product line. A BBN represents domain experts' knowledge and experiences accumulated from the development of similar projects. It helps us capture the impact of variants on quality attributes, and helps us predict and assess the quality of a product line member by performing quantitative analysis over it. For developing specific systems, members of a product line, we reuse the expertise captured by a BBN instead of working from scratch. We use examples from the Computer Aided Dispatch (CAD) product line project to illustrate our approach. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size.
Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.023529 | 0.018591 | 0.018182 | 0.009312 | 0.004791 | 0.003785 | 0.00141 | 0.000101 | 0.000043 | 0.000019 | 0.000002 | 0 | 0 | 0 |
Massively Parallel Lossless Compression Of Medical Images Using Least-Squares Prediction And Arithmetic Coding Medical imaging in hospitals requires fast and efficient image compression to support the clinical work flow and to save costs. Least-squares autoregressive pixel prediction methods combined with arithmetic coding constitutes the state of the art in lossless image compression. However, a high computational complexity of both prevents the application of respective CPU implementations in practice. We present a massively parallel compression system for medical volume images which runs on graphics cards. Image blocks are processed independently by separate processing threads. After pixel prediction with specialized border treatment, prediction errors are entropy coded with an adaptive binary arithmetic coder. Both steps are designed to match particular demands of the parallel hardware architecture. Comparisons with current image and video coders show efficiency gains of 3.3-13.6% while compression times can be reduced to a few seconds. | Hierarchical Oriented Predictions for Resolution Scalable Lossless and Near-Lossless Compression of CT and MRI Biomedical Images We propose a new hierarchical approach to resolution scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performances than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not really efficient on smooth images, we also introduce new predictors, which are dynamically optimized using a least-square criterion. Lossless compression results, which are obtained on a large-scale medical image database, are more than 4% better on CTs and 9% better on MRIs than resolution scalable JPEG-2000 (J2K) and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS and equivalent or a better PSNR than J2K for a high bit rate on noisy (native) medical images. | Adaptive sequential prediction of multidimensional signals with applications to lossless image coding. We investigate the problem of designing adaptive sequential linear predictors for the class of piecewise autoregressive multidimensional signals, and adopt an approach of minimum description length (MDL) to determine the order of the predictor and the support on which the predictor operates. The design objective is to strike a balance between the bias and variance of the prediction errors in the MDL criterion. The predictor design problem is particularly interesting and challenging for multidimensional signals (e.g., images and videos) because of the increased degree of freedom in choosing the predictor support. Our main result is a new technique of sequentializing a multidimensional signal into a sequence of nested contexts of increasing order to facilitate the MDL search for the order and the support shape of the predictor, and the sequentialization is made adaptive on a sample by sample basis. The proposed MDL-based adaptive predictor is applied to lossless image coding, and its performance is empirically established to be the best among all the results that have been published till present. 
| Data compression using adaptive coding and partial string matching The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source. | The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*.
This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | The use of goals to surface requirements for evolving systems This paper addresses the use of goals to surface requirements for the redesign of existing or legacy systems. Goals are widely recognized as important precursors to system requirements, but the process of identifying and abstracting them has not been researched thoroughly. We present a summary of a goal-based method (GBRAM) for uncovering hidden issues, goals, and requirements and illustrate its application to a commercial system, an Intranet-based electronic commerce application, evaluating the method in the process. The core techniques comprising GBRAM are the systematic application of heuristics and inquiry questions for the analysis of goals, scenarios and obstacles. We conclude by discussing the lessons learned through applying goal refinement in the field and the implications for future research. | Petri nets: Properties, analysis and applications Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets. | Design problem solving: a task analysis I propose a task structure for design by analyzing a general class of methods that I call propose-critique-modify methods. The task structure is constructed by identifying a range of methods for each task. For each method, the knowledge needed and the subtasks that it sets up are identified. This recursive style of analysis provides a framework in which we can understand a number of particular proposals for design problem solving as specific combinations of tasks, methods, and subtasks. Most of the subtasks are not really specific to design as such. The analysis shows that there is no one ideal method for design, and good design problem solving is a result of recursively selecting methods based on a number of criteria, including knowledge availability. How the task analysis can help in knowledge acquisition and system design is discussed. | WebWork: METEOR2's Web-Based Workflow Management System. METEOR workflow management systems consist of both (1) design/build-time and (2) run-time/enactment components for implementing workflow applications. An enactment system provides the command, communication and control for the individual tasks in the workflow. Tasks are the run-time instances of intra- or inter-enterprise applications. We are developing three implementations of the METEOR model: WebWork, OrbWork and NeoWork. This paper discusses WebWork, an implementation relying solely on Web technology as the infrastructure for the enactment system.
WebWork supports a distributed implementation with participation of multiple Web servers. It also supports automatic code generation of workflow applications from design specifications produced by a comprehensive graphical designer. WebWork has been developed as a complement of its more heavyweight counterparts (OrbWork and NeoWork), with the goal of providing ease of workflow application development, installation, use and maintenance. At the time of this writing, WebWork has been installed by several of the LSDIS Lab's industrial partners for testing, evaluation and building workflow applications. | Knowledge-based and statistical approaches to text retrieval Major research issues in information retrieval are reviewed, and developments in knowledge-based approaches are described. It is argued that although a fair amount of work has been done, the effectiveness of this approach has yet to be demonstrated. It is suggested that statistical techniques and knowledge-based approaches should be viewed as complementary, rather than competitive. | The multiway rendezvous The multiway rendezvous is a natural generalization of the rendezvous in which more than two processes may participate. The utility of the multiway rendezvous is illustrated by solutions to a variety of problems. To make their simplicity apparent, these solutions are written using a construct tailor-made to support the multiway rendezvous. The degree of support for multiway rendezvous applications by several well-known languages that support the two-way rendezvous is examined. Since such support for the multiway rendezvous is found to be inadequate, well-integrated extensions to these languages are considered that would help provide such support. | Developing Mode-Rich Satellite Software by Refinement in Event B
To ensure dependability of on-board satellite systems, the designers should, in particular, guarantee correct implementation
of the mode transition scheme, i.e., ensure that the states of the system components are consistent with the global system
mode. However, there is still a lack of scalable approaches to formal verification of correctness of complex mode transitions.
In this paper we present a formal development of an Attitude and Orbit Control System (AOCS) undertaken within the ICT DEPLOY
project. AOCS is a complex mode-rich system, which has an intricate mode-transition scheme. We show that refinement in Event
B provides the engineers with a scalable formal technique that enables both development of mode-rich systems and proof-based
verification of their mode consistency. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.028571 | 0.02 | 0.010526 | 0.000329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Maintaining a legacy: towards support at the architectural level An organization that develops large, software intensive systems with a long lifetime will encounter major changes in the market requirements, the software development environment, including its platform, and the target platform. In order to meet the challenges associated with these changes, software development has to undergo major changes as well. Especially when these systems are successful, and hence become an asset, particular care shall be taken to maintain this legacy; large systems with a long lifetime tend to become very complex and difficult to understand. Software architecture plays a vital role in the development of large software systems. For the purpose of maintenance, an up-to-date explicit description of the software architecture of a system supports understanding and comprehension of it, amongst other things. However, many large, complex systems do not have an up-to-date documented software architecture. Particularly in cases where these systems have a long lifetime, the (natural) turnover of personnel will make it very likely that many employees contributing to previous generations of the system are no longer available. A need to 'recover' the software architecture of the system may become prevalent, facilitating the understanding of the system, providing ways to improve its maintainability and quality and to control architectural changes. This paper gives an overview of an on-going effort to improve the maintainability and quality of a legacy system, and describes the recent introduction of support at the architectural level for program understanding and complexity control. Copyright (C) 2000 John Wiley & Sons, Ltd. | An Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction.
The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical...
| Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. 
Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Lossless hyperspectral image compression using intraband and interband predictors On-board data compression is a critical task that has to be carried out with restricted computational resources for remote sensing applications. This paper proposes an improved algorithm for onboard lossless compression of hyperspectral images, which combines low encoding complexity and high performance. This algorithm is based on hybrid prediction. In the proposed work, the decorrelation stage reinforces both intraband and interband predictions. The intraband prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction which is the combination of a linear prediction (LP) and a context prediction. Eventually, the residual image of hybrid context prediction is coded by the Huffman coding. An efficient hardware implementation of both predictors is achieved using FPGA-based acceleration and power analysis has been done to estimate the power consumption. Performance of the proposed algorithm is compared with some of the standard algorithms for hyperspectral images such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), JPEG-LS. Experimental results on AVIRIS data show that the proposed algorithm achieves high compression ratio with low complexity and computational cost. | An Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment.
This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems.
Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. 
However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
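Several abstracts in the row above treat programs as Dijkstra-style predicate transformers (weakest preconditions, data refinement). Below is a minimal illustrative sketch of that view in Python; the helper names (assign, seq, cond) and the swap example are invented for illustration and do not come from any of the cited papers.

```python
# A program is represented directly as its predicate transformer: a function
# that maps a postcondition (a boolean function over a state dict) to the
# weakest precondition guaranteeing that postcondition.

def assign(var, expr):
    """wp(var := expr, Q) = Q with var replaced by expr."""
    return lambda post: lambda s: post({**s, var: expr(s)})

def seq(t1, t2):
    """wp(S1; S2, Q) = wp(S1, wp(S2, Q))."""
    return lambda post: t1(t2(post))

def cond(guard, t_then, t_else):
    """wp(if g then S1 else S2, Q) evaluated pointwise on a state."""
    return lambda post: lambda s: t_then(post)(s) if guard(s) else t_else(post)(s)

# Example: swap two integers by arithmetic (x := x+y; y := x-y; x := x-y).
prog = seq(assign("x", lambda s: s["x"] + s["y"]),
           seq(assign("y", lambda s: s["x"] - s["y"]),
               assign("x", lambda s: s["x"] - s["y"])))

post = lambda s: (s["x"], s["y"]) == (2, 7)   # postcondition: values swapped
pre = prog(post)                              # weakest precondition as a predicate
print(pre({"x": 7, "y": 2}))                  # True: this state satisfies wp
print(pre({"x": 1, "y": 1}))                  # False
```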
New delay-dependent stability criteria for recurrent neural networks with time-varying delays. This work is concerned with the delay-dependentstability problem for recurrent neural networks with time-varying delays. A new improved delay-dependent stability criterion expressed in terms of linear matrix inequalities is derived by constructing a dedicated Lyapunov–Krasovskii functional via utilizing Wirtinger inequality and convex combination approach. Moreover, a further improved delay-dependent stability criterion is established by means of a new partitioning method for bounding conditions on the activation function and certain new activation function conditions presented. Finally, the application of these novel results to an illustrative example from the literature has been investigated and their effectiveness is shown via comparison with the existing recent ones. | Orthogonal-polynomials-based integral inequality and its applications to systems with additive time-varying delays. Recently, a polynomials-based integral inequality was proposed by extending the Moon’s inequality into a generic formulation. By imposing certain structures on the slack matrices of this integral inequality, this paper proposes an orthogonal-polynomials-based integral inequality which has lower computational burden than the polynomials-based integral inequality while maintaining the same conservatism. Further, this paper provides notes on relations among recent general integral inequalities constructed with arbitrary degree polynomials. In these notes, it is shown that the proposed integral inequality is superior to the Bessel–Legendre (B–L) inequality and the polynomials-based integral inequality in terms of the conservatism and computational burden, respectively. Moreover, the effectiveness of the proposed method is demonstrated by an illustrative example of stability analysis for systems with additive time-varying delays. | Stochastic stability for distributed delay neural networks via augmented Lyapunov-Krasovskii functionals. This paper is concerned with the analysis problem for the globally asymptotic stability of a class of stochastic neural networks with finite or infinite distributed delays. By using the delay decomposition idea, a novel augmented Lyapunov–Krasovskii functional containing double and triple integral terms is constructed, based on which and in combination with the Jensen integral inequalities, a less conservative stability condition is established for stochastic neural networks with infinite distributed delay by means of linear matrix inequalities. As for stochastic neural networks with finite distributed delay, the Wirtinger-based integral inequality is further introduced, together with the augmented Lyapunov–Krasovskii functional, to obtain a more effective stability condition. Finally, several numerical examples demonstrate that our proposed conditions improve typical existing ones. | Global Asymptotic Stability for Delayed Neural Networks Using an Integral Inequality Based on Nonorthogonal Polynomials. This brief is concerned with global asymptotic stability of a neural network with a time-varying delay. First, by introducing an auxiliary vector with some nonorthogonal polynomials, a slack-matrix-based integral inequality is established, which includes some existing one as its special case. Second, a novel Lyapunov-Krasovskii functional is constructed to suit for the use of the obtained integral... | Stability analysis of delayed neural networks via a new integral inequality. 
This paper focuses on stability analysis for neural networks systems with time-varying delays. A more general auxiliary function-based integral inequality is established and some improved delay-dependent stability conditions formulated in terms of linear matrix inequalities (LMIs) are derived by employing a suitable LyapunovKrasovskii functional (LKF) and the novel integral inequality. Three well-known application examples are provided to demonstrate the effectiveness and improvements of the proposed method. | New approach on robust delay-dependent H∞ control for uncertain T-S fuzzy systems with interval time-varying delay This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither any model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods. | New approach to stability criteria for generalized neural networks with interval time-varying delays. This paper is concerned with the problem of delay-dependent stability of delayed generalized continuous neural networks, which include two classes of fundamental neural networks, i.e., static neural networks and local field neural networks, as their special cases. It is assumed that the state delay belongs to a given interval, which means that the lower bound of delay is not restricted to be zero. An improved integral inequality lemma is proposed to handle the cross-product terms occurred in derivative of constructed Lyapunov–Krasovskii functional. By using the new lemma and delay partitioning method, some less conservative stability criteria are obtained in terms of LMIs. Numerical examples are finally given to illustrate the effectiveness of the proposed method over the existing ones. | Complete Quadratic Lyapunov functionals using Bessel-Legendre inequality The article is concerned with the stability analysis of time-delay systems using complete-Lyapunov functionals. This class of functionals has been employed in the literature because of their nice properties. Indeed, such a functional can be built if a system with a constant time delay is asymptotically stable. Hence, several articles aim at approximating their parameters thanks to a discretization method or polynomial modeling. The interest of such approximation is the design of tractable sufficient stability conditions expressed on the Linear Matrix Inequality or the Sum of Squares setups. In the present article, we provide an alternative method based on polynomial approximation which takes advantages of the Legendre polynomials and their properties. The resulting stability conditions are scalable with respect to the degree of the Legendre polynomials and are expressed in terms of a tractable LMI. | A novel stability analysis of linear systems under asynchronous samplings. This article proposes a novel approach to assess the stability of continuous linear systems with sampled-data inputs. 
The method, which is based on the discrete-time Lyapunov theorem, provides easy tractable stability conditions for the continuous-time model. Sufficient conditions for asymptotic and exponential stability are provided dealing with synchronous and asynchronous samplings and uncertain systems. An additional stability analysis is provided for the cases of multiple sampling periods and packet losses. Several examples show the efficiency of the method. | An image multiresolution representation for lossless and lossy compression We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods. | Proof rules and transformations dealing with fairness We provide proof rules enabling the treatment of two fairness assumptions in the context of Dijkstra's do-od-programs. These proof rules are derived by considering a transformed version of the original program which uses random assignments z ≔? and admits only fair computations. Various, increasingly complicated, examples are discussed. In all cases reasonably simple proofs can be given. The proof rules use well-founded structures corresponding to infinite ordinals and deal with the original programs and not their translated versions. | Requirements engineering in 2001: (virtually) managing a changing reality Trends in society and technology force requirements engineering to expand its role from a one-shot activity in the development process to a virtual image that accompanies the changing reality of a system. A maturing software market also requires a better understanding of the differentiation in market segments for requirements engineering and standardisation of methodologies within these segments. On the research side, this requires a coherent perspective of hitherto parallel research directions towards a comprehensive understanding of requirements processes, as well as the optimal exploitation of new technologies that support the main role of requirements engineering; mutual learning of all stakeholders concerned | Workshop on comparing description and frame logics The specification of reusable terminological knowledge is one of the key issues in today's knowledge engineering. Providing formal languages with precise semantics and inference support can significantly support this activity. The aim of the workshop was to understand and to compare existing approaches developed in other research communities. We investigated research on description languages and research on object-orient ed databases. Both provide the combination of rich terminological modeling primitives with well studied semantics and inference support. 
The goals of the workshop were to better understand and compare them, and to highlight common aspects and differences. Knowledge-based systems (KBSs) consist of a large amount of domain knowledge and problem-solving methods that describe the inference process of the system (28). The domain knowledge defines concepts, properties, relationships, heuristic rules, instances, etc. that are necessary to define the application problem and its solution process. Recent work on ontologies aims at developing reusable terminological knowledge which improves knowledge | TUGAS: Exploiting Unlabelled Data for Twitter Sentiment Analysis | 1.051512 | 0.05 | 0.05 | 0.020533 | 0.014357 | 0.008893 | 0.002261 | 0.000223 | 0.000018 | 0 | 0 | 0 | 0 | 0
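The stability criteria surveyed in the row above are certificates built from Lyapunov–Krasovskii functionals and expressed as linear matrix inequalities. A rough delay-free analogue is the classical Lyapunov equation; the sketch below, which assumes NumPy/SciPy and an invented example matrix, only illustrates that simpler certificate, not the delay-dependent LMI conditions of the cited papers.

```python
# For the delay-free system x' = A x, asymptotic stability is equivalent to the
# existence of P > 0 solving A^T P + P A = -Q for some Q > 0.  The delay-dependent
# criteria in the abstracts above extend this idea to Lyapunov-Krasovskii
# functionals whose feasibility is checked as an LMI.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])   # example system matrix (assumed, not from the papers)
Q = np.eye(2)

# solve_continuous_lyapunov solves a X + X a^H = q; with a = A^T and q = -Q this
# yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eigs = np.linalg.eigvalsh((P + P.T) / 2)
print("P =", P)
print("P positive definite -> stable:", bool(np.all(eigs > 0)))
```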
Involutions On Relational Program Calculi The standard Galois connection between the relational and predicate-transformer models of sequential programming (defined in terms of weakest precondition) confers a certain similarity between them. This paper investigates the extent to which the important involution on transformers (which, for instance, interchanges demonic and angelic nondeterminism, and reduces the two kinds of simulation in the relational model to one kind in the transformer model) carries over to relations. It is shown that no exact analogue exists; that the two complement-based involutions are too weak to be of much use; but that the translation to relations of transformer involution under the Galois connection is just strong enough to support Boolean-algebra style reasoning, a claim that is substantiated by proving properties of deterministic computations. Throughout, the setting is that of the guarded-command language augmented by the usual specification commands; and where possible algebraic reasoning is used in place of the more conventional semantic reasoning. | Binary Multirelations
Relational models for imperative programming languages provide a representation of commands in terms of binary input-output relations over states. Various relational models have arisen from modelling decisions on the distinction between angelic- and demonic nondeterminism, and have been shown to be isomorphic to disjunctive- or conjunctive predicate transformer semantics. For commands with both angelic- and demonic nondeterminism it is known that monotone unary operators provide a predicate transformer semantics but there is no conventional relational model. In this paper we propose a novel relational representation, in terms of binary multirelations, for such commands. Then we show that binary multirelations and monotone unary operators are intertranslatable. | Duality in specification languages: a lattice-theoretical approach A very general lattice-based language of commands, based on the primitive operations of substitution and test for equality, is constructed. This base language permits unbounded nondeterminism, demonic and angelic nondeterminism. A dual language permitting miracles is constructed. Combining these two languages yields an extended base language which is complete, in the sense that all monotonic predicate transformers can be constructed in it. The extended base language provides a unifying framework for various specification languages; we show how two Dijkstra-style specification languages can be embedded in it.—Authors' Abstract | A generalization of Dijkstra's calculus Dijkstra's calculus of guarded commands can be generalized and simplified by dropping the law of the excluded miracle. This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion. The treatment of recursion uses the fixpoint method from denotational semantics. The paper relies only on the algebraic properties of predicates; individual states are not mentioned (except for motivation). To achieve this, we apply the correspondence between programs and predicates that underlies predicative programming. The paper is written from the axiomatic semantic point of view, but its contents can be described from the denotational semantic point of view roughly as follows: The Plotkin-Apt correspondence between wp semantics and the Smyth powerdomain is extended to a correspondence between the full wp/wlp semantics and the Plotkin powerdomain extended with the empty set. | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | Programmers use slices when debugging Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program. | List processing in real time on a serial computer A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the heap size to finish.
A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation. Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications. | A new, fast, and efficient image codec based on set partitioning in hierarchical trees Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code | A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods, cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation even if computation power allows using more folds. | Hex-splines: a novel spline family for hexagonal lattices This paper proposes a new family of bivariate, nonseparable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions.
A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact they form a Riesz basis. We also highlight the approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data. | Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase ‡ ‡ ConceptBase is available through web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html. meta database management system and has been applied in a number of real-world requirements engineering projects. | A Conceptual Graph Model for W3C Resource Description Framework With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable contents-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. 
Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.2 | 0.009524 | 0.002353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
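One abstract in the row above compares accuracy-estimation methods and recommends ten-fold stratified cross-validation for model selection. A minimal sketch of that procedure, assuming scikit-learn and using an arbitrary stand-in dataset and classifier rather than those of the study:

```python
# Ten-fold stratified cross-validation: each fold preserves the class
# proportions of the full dataset, and the reported accuracy is the mean
# over the ten held-out folds.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(f"10-fold stratified CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```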
Multifunctional software systems: Structured modeling and specification of functional requirements This paper deals with the structured specification of interface behavior of multifunctional systems, which are systems that offer a variety of functions for different purposes and use cases. It introduces a theory and first concepts of a methodology for the identification, structured modeling, and formalization of functional requirements of multifunctional systems. Service hierarchies specify multifunctional systems in terms of their provided sub-functions called services together with their mutual relationships and dependencies. A service hierarchy describes the functionality of multifunctional systems in a structured way. Each service is specified independently and the specification is added to the service hierarchy. Modes help to specify the feature interactions and by that functional dependencies between the services. The approach is based on the Focus theory for modeling interface behavior and services. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. 
We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. 
Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Entropy in Design Phase: A Higraph-Based Model Approach The exponential growing effort, cost and time investment of complex systems in modeling phase emphasize the need for a methodology, a framework and a environment to handle the system model complexity. For that, it is necessary to be able to measure the system model entropy. This paper highlights the requirements a model needs to fulfill to match human user expectations. It suggests a hierarchical graph-based formalism for modeling complex systems and presents transformations to handle the underlying complexity. Finally, a way to measure system model structural complexity based on Shannon theory of information is proposed. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outine the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific p`rograms. | Model checking In computer system design, we distinguish between closed and open systems. A closed systemis a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an o ngoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems(mod- ule checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for speci fications in CTL . This bad news is also carried over when we consider the program-complexity of module checking. 
As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviourin terms of MACHINES whose state changes under OPERATIONS.The process algebra CSP is an event-based formalism that enablesdescriptions of patterns of system behaviour. This paper is concerned withthe combination of these complementary views, in which CSP is used to describethe control executive for a B Abstract System. We discuss consistencybetween the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. 
| 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Non-repetitive 3-coloring of subdivided graphs We show that every graph can be subdivided in a way that the resulting graph can be colored without repetitions on paths using only 3 colors. This extends the result of Thue asserting the existence of arbitrarily long nonrepetitive strings over a 3-letter alphabet. | The complexity of nonrepetitive edge coloring of graphs A squarefree word is a sequence w of symbols such that there are no strings x, y, and z for which w = xyyz. A nonrepetitive coloring of a graph is an edge coloring in which the sequence of colors along any open path is squarefree. The Thue number π(G) of a graph G is the least n for which the graph can be nonrepetitively colored in n colors. A number of recent papers have shown both exact and approximation results for Thue numbers of various classes of graphs. We show that determining whether a graph G has π(G) ≤ k is Σ₂ᵖ-complete. When we restrict to paths of length at most n, the problem becomes NP-complete for fixed n. For n = 2, this is the edge coloring problem; thus the bounded-path version can be thought of as a generalization of edge coloring. | Thue choosability of trees A vertex colouring of a graph G is nonrepetitive if for any path P=(v1,v2,...,v2r) in G, the first half is coloured differently from the second half. The Thue choice number of G is the least integer ℓ such that for every ℓ-list assignment L of G, there exists a nonrepetitive L-colouring of G. We prove that for any positive integer ℓ, there is a tree T with πch(T) > ℓ. On the other hand, it is proved that if G' is a graph of maximum degree Δ, and G is obtained from G' by attaching to each vertex v of G' a connected graph of tree-depth at most z rooted at v, then πch(G) ≤ c(Δ,z) for some constant c(Δ,z) depending only on Δ and z. | Tree-depth, subgraph coloring and homomorphism bounds We define the notions tree-depth and upper chromatic number of a graph and show their relevance to local-global problems for graph partitions. In particular we show that the upper chromatic number coincides with the maximal function which can be locally demanded in a bounded coloring of any proper minor closed class of graphs. The rich interplay of these notions is applied to a solution of bounds of proper minor closed classes satisfying local conditions. In particular, we prove the following result: For every graph M and a finite set F of connected graphs there exists a (universal) graph U = U(M, F) ∈ Forbh(F) such that any graph G ∈ Forbh(F) which does not have M as a minor satisfies G → U (i.e. is homomorphic to U). This solves the main open problem of restricted dualities for minor closed classes and as an application it yields the bounded chromatic number of exact odd powers of any graph in an arbitrary proper minor closed class. We also generalize the decomposition theorem of DeVos et al. [M. DeVos, G. Ding, B. Oporowski, D.P. Sanders, B. Reed, P. Seymour, D. Vertigan, Excluding any graph as a minor allows a low tree-width 2-coloring, J. Combin. Theory Ser. B 91 (2004) 25-41]. | On Square-Free Edge Colorings Of Graphs An edge coloring of a graph is called square-free if the sequence of colors on certain walks is not a square, that is not of the form x(1),...,x(m), x(1),...,x(m), for any m ∈ N. Recently, various classes of walks have been suggested to be considered in the above definition.
We construct graphs for which the minimum number of colors needed for a square-free coloring is different if the considered set of walks varies, solving a problem posed by Brešar and Klavžar. We also prove the following: if an edge coloring of G is not square-free (even in the most general sense), then the length of the shortest square walk is at most 8|E(G)|². Hence, the necessary number of colors for a square-free coloring is algorithmically computable. | Nonrepetitive Colourings of Planar Graphs with $O(\log n)$ Colours A vertex colouring of a graph is nonrepetitive if there is no path for which the first half of the path is assigned the same sequence of colours as the second half. The nonrepetitive chromatic number of a graph G is the minimum integer k such that G has a nonrepetitive k-colouring. Whether planar graphs have bounded nonrepetitive chromatic number is one of the most important open problems in the field. Despite this, the best known upper bound is O(√n) for n-vertex planar graphs. We prove an O(log n) upper bound. | Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from
deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary variables is also given. The applicability of the techniques presented is discussed through various examples; their use for verification purposes is illustrated as well. | The lattice of data refinement We define a very general notion of data refinement which comprises the traditional notion of data refinement as a special case. Using the concepts of duals and adjoints we define converse commands and find a symmetry between ordinary data refinement and a dual (backward) data refinement. We show how ordinary and backward data refinement are interpreted as simulation and we derive rules for the piecewise data refinement of programs. Our results are valid for a general language, covering... | Logarithmical hopping encoding: a low computational complexity algorithm for image compression LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber-Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values is logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient. | Class-based n-gram models of natural language We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics. | Reflection and semantics in LISP | Navigating hierarchically clustered networks through fisheye and full-zoom methods Many information structures are represented as two-dimensional networks (connected graphs) of links and nodes. Because these networks tend to be large and quite complex, people often prefer to view part or all of the network at varying levels of detail. Hierarchical clustering provides a framework for viewing the network at different levels of detail by superimposing a hierarchy on it. Nodes are grouped into clusters, and clusters are themselves placed into other clusters. Users can then navigate these clusters until an appropriate level of detail is reached. This article describes an experiment comparing two methods for viewing hierarchically clustered networks. Traditional full-zoom techniques provide details of only the current level of the hierarchy.
In contrast, fisheye views, generated by the “variable-zoom” algorithm described in this article, provide information about higher levels as well. Subjects using both viewing methods were given problem-solving tasks requiring them to navigate a network, in this case, a simulated telephone system, and to reroute links in it. Results suggest that the greater context provided by fisheye views significantly improved user performance. Users were quicker to complete their task and made fewer unnecessary navigational steps through the hierarchy. This validation of fisheye views is important for designers of interfaces to complicated monitoring systems, such as control rooms for supervisory control and data acquisition systems, where efficient human performance is often critical. However, control room operators remained concerned about the size and visibility tradeoffs between the fine detail provided by full-zoom techniques and the global context supplied by fisheye views. Specific interface features are required to reconcile the differences. | A Task-Based Methodology for Specifying Expert Systems A task-based specification methodology for expert system specification that is independent of the problem solving architecture, that can be applied to many expert system applications, that focuses on what the knowledge is, not how it is implemented, that introduces the major concepts involved gradually, and that supports verification and validation is discussed. To evaluate the methodology, a specification of R1/SOAR, an expert system that reimplements a major portion of the R1 expert system, was reverse engineered. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.047979 | 0.028738 | 0.023612 | 0.019852 | 0.015857 | 0.01 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
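The first abstracts in the row above build on Thue's theorem that arbitrarily long nonrepetitive (square-free) words exist over a three-letter alphabet. The sketch below generates such a word with one standard square-free morphism and brute-force checks the generated prefix; the morphism choice and function names are illustrative assumptions, not taken from the cited papers.

```python
# Generate a ternary word by iterating a morphism and verify that the result
# contains no square (no factor of the form ww).
def iterate_morphism(word, steps):
    m = {"a": "abc", "b": "ac", "c": "b"}   # a classical square-free morphism, fixed by 'a'
    for _ in range(steps):
        word = "".join(m[ch] for ch in word)
    return word

def has_square(w):
    n = len(w)
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for L in range(1, n // 2 + 1)
               for i in range(n - 2 * L + 1))

w = iterate_morphism("a", 7)
print(len(w), "letters, square-free:", not has_square(w))
```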
New Approach to Requirements Trade-Off Analysis for Complex Systems In this paper, we propose a faceted requirement classification scheme for analyzing heterogeneous requirements. The representation of vague requirements is based on Zadeh's canonical form in test-score semantics and an extension of the notion of soft conditions. The trade-off among vague requirements is analyzed by identifying the relationship between requirements, which could be either conflicting, irrelevant, cooperative, counterbalance, or independent. Parameterized aggregation operators, fuzzy and/or, are selected to combine individual requirements. An extended hierarchical aggregation structure is proposed to establish a four-level requirements hierarchy to facilitate requirements and criticalities aggregation through the fuzzy and/or. A compromise overall requirement can be obtained through the aggregation of individual requirements based on the requirements hierarchy. The proposed approach provides a framework for formally analyzing and modeling conflicts between requirements, and for users to better understand relationships among their requirements. | A requirements and design aid for relational data bases A tool is described for defining data processing system requirements and for automatically generating data base designs from the requirements. The generated designs are specific to System R but the mapping rules are valid for the relational model in general and can be adapted to other data models as well. The requirements and design are stored in a System R data base, are cross-referenced with each other, and can be accessed and used for other purposes. The requirements are defined in terms of an organized common-sense semantic model and serve the function of the Conceptual Schema in the ANSI/SPARC three schema framework. The tool generates (synthesizes) relational designs that have no redundancy, no update anomalies, and are in 5th normal form. The requirements analysis and design generation procedures are illustrated with a case study. | Modeling imprecise requirements with fuzzy objects One of the foci of the recent development in object-oriented modeling (OOM) has been the extension of OOM to fuzzy logic to capture and analyze informal requirements that are imprecise in nature. In this paper, a new approach to object-oriented modeling based on fuzzy logic is proposed to formulate imprecise requirements along four dimensions: (1) to extend a class by grouping objects with similar properties into a fuzzy class, (2) to encapsulate fuzzy rules in a fuzzy class to describe the relationship between attributes, (3) to evaluate the membership function of a fuzzy class by considering both static and dynamic properties, and (4) to model uncertain fuzzy associations between classes. The proposed approach is illustrated using the problem domain of a meeting scheduler system. (C) 1999 Elsevier Science Inc. All rights reserved. | A theory of action for multi-agent planning A theory of action suitable for reasoning about events in multiagent or dynamically changing environments is pre- scntcrl. A device called a process model is used to represent the observable behavior of an agent in performing an ac- tion. This model is more general than previous models of act ion, allowing sequencing, selection, nondeterminism, it- eration, and parallelism to be represented. It is shown how this model can be utilized in synthesizing plans and rea- soning about concurrency. 
In particular, conditions are derived for determining whether or not concurrent actions are free from mutual interference. It is also indicated how this theory provides a basis for understanding and reasoning about action sentences in both natural and programming languages. | A fuzzy Petri net-based expert system and its application to damage assessment of bridges In this paper, a fuzzy Petri net approach to modeling fuzzy rule-based reasoning is proposed to bring together the possibilistic entailment and the fuzzy reasoning to handle uncertain and imprecise information. The three key components in our fuzzy rule-based reasoning-fuzzy propositions, truth-qualified fuzzy rules, and truth-qualified fuzzy facts-can be formulated as fuzzy places, uncertain transitions, and uncertain fuzzy tokens, respectively. Four types of uncertain transitions-inference, aggregation, duplication, and aggregation-duplication transitions-are introduced to fulfil the mechanism of fuzzy rule-based reasoning. A framework of integrated expert systems based on our fuzzy Petri net, called fuzzy Petri net-based expert system (FPNES), is implemented in Java. Major features of FPNES include knowledge representation through the use of hierarchical fuzzy Petri nets, a reasoning mechanism based on fuzzy Petri nets, and transformation of modularized fuzzy rule bases into hierarchical fuzzy Petri nets. An application to the damage assessment of the Da-Shi bridge in Taiwan is used as an illustrative example of FPNES. | A configurable framework for method and tool integration There is an urgent need to provide a sound generic framework for method and tool integration, where many differing notations are used, software development is distributed and management support for the software development process is provided. This paper argues that there is much to be learnt from proven practical techniques for software construction, particularly those that support distributed software integration, heterogeneity and software management. Configuration Programming is one such approach which advocates the use of a separate, declarative configuration language for the description of system structure. It has been used in the Conic Environment for the development of distributable software, and is being extended for the configuration of heterogeneous components programmed in different programming languages. A number of software tools exist for the development, construction and management of Conic systems. This paper shows how an analogous set of the principles, practice and tools from configuration programming can be combined with recent work on ViewPoints to provide a configurable framework for method and tool integration. | Automating the Transformational Development of Software This paper reports on efforts to extend the transformational implementation (TI) model of software development [1]. In particular, we describe a system that uses AI techniques to automate major portions of a transformational implementation. The work has focused on the formalization of the goals, strategies, selection rationale, and finally the transformations used by expert human developers. A system has been constructed that includes representations for each of these problem-solving components, as well as machinery for handling human-system interaction and problem-solving control. We will present the system and illustrate automation issues through two annotated examples.
| System processes are software too This talk explores the application of software engineering tools, technologies, and approaches to developing and continuously improving systems by focusing on the systems' processes. The systems addressed are those that are complex coordinations of the efforts of humans, hardware devices, and software subsystems, where humans are on the “inside”, playing critical roles in the functioning of the system and its processes. The talk suggests that in such cases, the collection of processes that use the system is tantamount to being the system itself, suggesting that improving the system's processes amounts to improving the system. Examples of systems from a variety of different domains that have been addressed and improved in this way will be presented and explored. The talk will suggest some additional untried software engineering ideas that seem promising as vehicles for supporting system development and improvement, and additional system domains that seem ripe for the application of this kind of software-based process technology. The talk will emphasize that these applications of software engineering approaches to systems has also had the desirable effect of adding to our understandings of software engineering. These understandings have created a software engineering research agenda that is complementary to, and synergistic with, agendas for applying software engineering to system development and improvement. | A taxonomy for the early stages of the software development life cycle Most researchers in the software engineering community use the term “requirements” to describe the initial stage of software development, and they define requirements to be a process of describing what , not how . However, the range of tools and techniques that are currently sold as requirements tools and techniques extends from aids for analysts asking potential customers appropriate questions about an existent problem to aids for defining algorithms for software modules. This paper presents a taxonomy of the early stages of the software development life cycle to enable prospective tool and technique users to understand what they are buying and to enable future toolsmiths and technique developers to uniquely categorize and characterize their product in comparison with others. | SA-ER: A Methodology that Links Structured Analysis and Entity-Relationship Modeling for Database Design | Building problem domain ontology from security requirements in regulatory documents Establishing secure systems assurance based on Certification and Accreditation (C&A) activities, requires effective ways to understand the enforced security requirements, gather relevant evidences, perceive related risks in the operational environment, and reveal their causal relationships with other domain concepts. However, C&A security requirements are expressed in multiple regulatory documents with complex interdependencies at different levels of abstractions that often result in subjective interpretations and non-standard implementations. Their non-functional nature imposes complex constraints on the emergent behavior of software-intensive systems, making them hard to understand, predict, and control. To address these issues, we present novel techniques from software requirements engineering and knowledge engineering for systematically extracting, modeling, and analyzing security requirements and related concepts from multiple C&A-enforced regulatory documents. 
We employ advanced ontological engineering processes as our primary modeling technique to represent complex and diverse characteristics of C&A security requirements and related domain knowledge. We apply our methodology to build problem domain ontology from regulatory documents enforced by the Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP). | The Depth And Width Of Local Minima In Discrete Solution Spaces Heuristic search techniques such as simulated annealing and tabu search require "tuning" of parameters (i.e., the cooling schedule in simulated annealing, and the tabu list length in tabu search), to achieve optimum performance. In order for a user to anticipate the best choice of parameters, thus avoiding extensive experimentation, a better understanding of the solution space of the problem to be solved is needed. Two functions of the solution space, the maximum depth and the maximum width of local minima are discussed here, and sharp bounds on the value of these functions are given for the 0-1 knapsack problem and the cardinality set covering problem. | Non-Repetitive Tilings In 1906 Axel Thue showed how to construct an infinite non-repetitive (or square-free) word on an alphabet of size 3. Since then this result has been rediscovered many times and extended in many ways. We present a two-dimensional version of this result. We show how to construct a rectangular tiling of the plane using 5 symbols which has the property that lines of tiles which are horizontal, vertical or have slope +1 or -1 contain no repetitions. As part of the construction we introduce a new type of word, one that is non-repetitive up to mod k, which is of interest in itself. We also indicate how our results might be extended to higher dimensions. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1.014556 | 0.012585 | 0.0125 | 0.006727 | 0.00625 | 0.003256 | 0.002238 | 0.000714 | 0.000207 | 0.00007 | 0.000008 | 0 | 0 | 0
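The trade-off analysis above combines individual requirement satisfaction degrees with parameterized fuzzy and/or operators. A minimal sketch of one common compensatory formulation is given below; the gamma parameter, the averaging form, and the example degrees are assumptions for illustration rather than the exact operators defined in the paper.

```python
# Parameterized "fuzzy and" / "fuzzy or" over satisfaction degrees in [0, 1].
# gamma = 1 gives the strict min/max; gamma = 0 gives the fully compensatory mean.
def fuzzy_and(degrees, gamma):
    assert degrees and 0.0 <= gamma <= 1.0
    return gamma * min(degrees) + (1.0 - gamma) * sum(degrees) / len(degrees)

def fuzzy_or(degrees, gamma):
    assert degrees and 0.0 <= gamma <= 1.0
    return gamma * max(degrees) + (1.0 - gamma) * sum(degrees) / len(degrees)

# Three requirements with hypothetical satisfaction degrees 0.9, 0.6 and 0.4.
reqs = [0.9, 0.6, 0.4]
print(round(fuzzy_and(reqs, gamma=0.7), 3))   # conflicting: pulled toward the minimum
print(round(fuzzy_or(reqs, gamma=0.7), 3))    # cooperative: pulled toward the maximum
```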
Hyperspectral data compression using double sparsity model The increased spatial and spectral resolution of hyperspectral images (HSIs) leads to very high data rates which cause difficulties in storing and transmitting all acquired data. Efficient compression techniques are a desirable solution to this problem. In this paper, we propose a compression method based on the double sparsity model which has great abilities in capturing distinctive characteristics of signals and in sparsifying signals. HSIs can be expressed as a data cube with the third dimension specified by spectral bands. Pixel vectors that carry the radiance information of substances in the corresponding resolution cell form a family of signals. First, all pixel vectors are sparsely coded using a learned sparse dictionary. The atom position indices and non-zero values are entropy coded using arithmetic codec after DPCM and uniform quantization. Experiments reveal that our approach outperforms 3D-SPIHT and JPEG2000 in rate distortion performance and in preserving spectral information. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short).
We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus.
The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
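The double-sparsity compression scheme summarized above sparsely codes each pixel spectrum over a learned dictionary and then entropy-codes the atom indices and non-zero values after DPCM and uniform quantization. The sketch below covers only that coefficient pipeline under simplifying assumptions: the dictionary is random rather than learned with a double-sparsity structure, the sizes and quantization step are made up, and the arithmetic coding stage is omitted.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with k atoms of D."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return support, coeffs

rng = np.random.default_rng(0)
bands, n_atoms, k = 32, 64, 3                 # illustrative sizes only
D = rng.standard_normal((bands, n_atoms))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms
pixel = D[:, [3, 17, 40]] @ np.array([2.0, -1.0, 0.5])   # synthetic pixel spectrum

support, coeffs = omp(D, pixel, k)

# Coefficient-side front end for an entropy coder: DPCM on the sorted atom
# indices plus uniform quantization of the non-zero values (arithmetic coding
# of the resulting symbols is left out of this sketch).
order = np.argsort(support)
indices = np.asarray(support)[order]
values = np.asarray(coeffs)[order]
dpcm_indices = np.diff(indices, prepend=0)
step = 0.05
q_values = np.round(values / step).astype(int)
print(dpcm_indices, q_values)
```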
A historical perspective of speech recognition What do we know now that we did not know 40 years ago? | On large-vocabulary speaker-independent continuous speech recognition In this article, we describe Sphinx, the first speaker-independent, large-vocabulary, continuous speech recognition system. We present its first results, compare its performance with that of other similar systems, and explain its high accuracy. | Automatic speech recognition - an approach for designing inclusive games Computer games are now a part of our modern culture. However, certain categories of people are excluded from this form of entertainment and social interaction because they are unable to use the interface of the games. The reason for this can be deficits in motor control, vision or hearing. By using automatic speech recognition systems (ASR), voice driven commands can be used to control the game, which can thus open up the possibility for people with motor system difficulty to be included in game communities. This paper aims at finding a standard way of using voice commands in games which uses a speech recognition system in the backend, and that can be universally applied for designing inclusive games. Present speech recognition systems, however, do not support emotions, attitudes, tones etc. This is a drawback because such expressions can be vital for gaming. Taking multiple types of existing genres of games into account and analyzing their voice command requirements, a general ASRS module is proposed which can work as a common platform for designing inclusive games. A fuzzy logic controller is then proposed to enhance the system. The standard voice driven module can be based on an algorithm or a fuzzy controller which can be used to design software plug-ins or can be included in a microchip. It then can be integrated with the game engines, creating the possibility of voice driven universal access for controlling games. | How To Present The History Of Digital Games: Enthusiast, Emancipatory, Genealogical, And Pathological Approaches This article approaches the historiography of digital games by suggesting a categorization of four different genres that can be utilized in the presentation of the history of digital games: enthusiast, emancipatory, genealogical, and pathological. All of these genres are based on various conceptions of what is important in the history of digital games and to whom the history is primarily targeted. The article also evaluates the premises of the authors of the histories. The present article's main objective is to create suggestions for a unique classification that would be especially suitable for the historiography of digital games. | We need to talk: HCI and the delicate topic of spoken language interaction Speech and natural language remain our most natural form of interaction; yet the HCI community have been very timid about focusing their attention on designing and developing spoken language interaction techniques. This may be due to a widespread perception that perfect domain-independent speech recognition is an unattainable goal. Progress is continuously being made in the engineering and science of speech and natural language processing, however, and there is also recent research that suggests that many applications of speech require far less than 100% accuracy to be useful in many contexts.
Engaging the CHI community now is timely -- many recent commercial applications, especially in the mobile space, are already tapping the increased interest in and need for natural user interfaces (NUIs) by enabling speech interaction in their products. As such, the goal of this panel is to bring together interaction designers, usability researchers, and general HCI practitioners to discuss the opportunities and directions to take in designing more natural interactions based on spoken language, and to look at how we can leverage recent advances in speech processing in order to gain widespread acceptance of speech and natural language interaction. | The structured design of cryptographically good s-boxes We describe a design procedure for the s-boxes of private key cryptosystems constructed as substitution-permutation networks (DES-like cryptosystems). Our procedure is proven to construct s-boxes which are bijective, are highly nonlinear, possess the strict avalanche criterion, and have output bits which act (vitually) independently when any single input bit is complemented. Furthermore, our procedure is very efficient: we have generated approximately 60 such 4 × 4 s-boxes in a few seconds of CPU time on a SUN workstation. | An incremental ant colony optimization based approach to task assignment to processors for multiprocessor scheduling. Optimized task scheduling is one of the most important challenges to achieve high performance in multiprocessor environments such as parallel and distributed systems. Most introduced task-scheduling algorithms are based on the so-called list scheduling technique. The basic idea behind list scheduling is to prepare a sequence of nodes in the form of a list for scheduling by assigning them some priority measurements, and then repeatedly removing the node with the highest priority from the list and allocating it to the processor providing the earliest start time (EST). Therefore, it can be inferred that the makespans obtained are dominated by two major factors: (1) which order of tasks should be selected (sequence subproblem); (2) how the selected order should be assigned to the processors (assignment subproblem). A number of good approaches for overcoming the task sequence dilemma have been proposed in the literature, while the task assignment problem has not been studied much. The results of this study prove that assigning tasks to the processors using the traditional EST method is not optimum; in addition, a novel approach based on the ant colony optimization algorithm is introduced, which can find far better solutions. | Robust and Imperceptible Dual Watermarking for Telemedicine Applications In this paper, the effects of different error correction codes on the robustness and imperceptibility of discrete wavelet transform and singular value decomposition based dual watermarking scheme is investigated. Text and image watermarks are embedded into cover radiological image for their potential application in secure and compact medical data transmission. Four different error correcting codes such as Hamming, the Bose, Ray-Chaudhuri, Hocquenghem (BCH), the Reed---Solomon and hybrid error correcting (BCH and repetition code) codes are considered for encoding of text watermark in order to achieve additional robustness for sensitive text data such as patient identification code. Performance of the proposed algorithm is evaluated against number of signal processing attacks by varying the strength of watermarking and covers image modalities. 
The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of watermarked image.This algorithm combines the advantages and removes the disadvantages of the two transform techniques. Out of the three error correcting codes tested, it has been found that Reed---Solomon shows the best performance. Further, a hybrid model of two of the error correcting codes (BCH and repetition code) is concatenated and implemented. It is found that the hybrid code achieves better results in terms of robustness. This paper provides a detailed analysis of the obtained experimental results. | Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased. | Incorporating usability into requirements engineering tools The development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. This definition process requires a significant amount of communication between the requestor and the developer of the system. In recent years, several methodologies and tools have been proposed to improve this communication process. This paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool. | Non-interference through determinism The standard approach to the specification of a secure system is to present a (usually state-based) abstract security model separately from the specification of the system's functional requirements, and establishing a correspondence between the two specifications. This complex treatment has resulted in development methods distinct from those usually advocated for general applications.We provide a novel and intellectually satisfying formulation of security properties in a process algebraic framework, and show that these are preserved under refinement. We relate the results to a more familiar state-based (Z) specification methodology. There are efficient algorithms for verifying our security properties using model checking. | Matching language and hardware for parallel computation in the Linda Machine The Linda Machine is a parallel computer that has been designed to support the Linda parallel programming environment in hardware. Programs in Linda communicate through a logically shared associative memory called tuple space. The goal of the Linda Machine project is to implement Linda's high-level shared-memory abstraction efficiently on a nonshared-memory architecture. The authors describe the machine's special-purpose communication network and its associated protocols, the design of the Linda coprocessor, and the way its interaction with the network supports global access to tuple space. The Linda Machine is in the process of fabrication. The authors discuss the machine's projected performance and compare this to software versions of Linda. 
| Refinement in Object-Z and CSP In this paper we explore the relationship between refinement in Object-Z and refinement in CSP. We prove with a simple counterexample that refinement within Object-Z, established using the standard simulation rules, does not imply failures-divergences refinement in CSP. This contradicts accepted results.Having established that data refinement in Object-Z and failures refinement in CSP are not equivalent we identify alternative refinement orderings that may be used to compare Object-Z classes and CSP processes. When reasoning about concurrent properties we need the strength of the failures-divergences refinement ordering and hence identify equivalent simulation rules for Object-Z. However, when reasoning about sequential properties it is sufficient to work within the simpler relational semantics of Object-Z. We discuss an alternative denotational semantics for CSP, the singleton failures semantic model, which has the same information content as the relational model of Object-Z. | Reversible data hiding by adaptive group modification on histogram of prediction errors. In this work, the conventional histogram shifting (HS) based reversible data hiding (RDH) methods are first analyzed and discussed. Then, a novel HS based RDH method is put forward by using the proposed Adaptive Group Modification (AGM) on the histogram of prediction errors. Specifically, in the proposed AGM method, multiple bins are vacated based on their magnitudes and frequencies of occurrences by employing an adaptive strategy. The design goals are to maximize hiding elements while minimizing shifting and modification elements to maintain image high quality by giving priority to the histogram bins utilized for hiding. Furthermore, instead of hiding only one bit at a time, the payload is decomposed into segments and each segment is hidden by modifying a triplet of prediction errors to suppress distortion. Experimental results show that the proposed AGM technique outperforms the current state-of-the-art HS based RDH methods. As a representative result, the proposed method achieves an improvement of 4.30 dB in terms of PSNR when 105,000 bits are hidden into the test Lenna image. | 1.22 | 0.22 | 0.22 | 0.22 | 0.076667 | 0.004 | 0.001667 | 0.000278 | 0 | 0 | 0 | 0 | 0 | 0 |
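The reversible-data-hiding abstract that closes this row builds on conventional histogram shifting. For orientation, here is a minimal sketch of that plain histogram-shifting baseline on a synthetic array; the peak/zero bin choice, the payload, and the toy cover are assumptions, and the paper's adaptive group modification and prediction-error decomposition are not implemented.

```python
import numpy as np

def hs_embed(img, bits, peak, zero):
    """Conventional histogram-shifting embedding (assumes zero > peak and the zero bin is empty)."""
    stego = img.astype(np.int16).copy()
    stego[(stego > peak) & (stego < zero)] += 1            # vacate the bin at peak+1
    carriers = np.flatnonzero(stego == peak)               # capacity = size of the peak bin
    stego.flat[carriers[:len(bits)]] += np.asarray(bits, dtype=np.int16)
    return stego

def hs_extract(stego, peak, zero, n_bits):
    stego = stego.astype(np.int16)
    carriers = np.flatnonzero((stego == peak) | (stego == peak + 1))
    bits = (stego.flat[carriers[:n_bits]] == peak + 1).astype(int)
    cover = stego.copy()
    cover[(cover > peak) & (cover <= zero)] -= 1           # undo the shift, restoring the cover
    return bits, cover

rng = np.random.default_rng(1)
cover = rng.integers(90, 110, size=(8, 8))                 # synthetic 8x8 "image"
payload = [1, 0, 1, 1]
peak = int(np.bincount(cover.ravel(), minlength=256).argmax())
stego = hs_embed(cover, payload, peak=peak, zero=255)      # 255 is empty in this toy cover
recovered, restored = hs_extract(stego, peak, 255, len(payload))
assert list(recovered) == payload and np.array_equal(restored, cover)
print("payload recovered and cover restored exactly")
```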
Hyperspectral Image Compression Using Hybrid Transform With Different Wavelet-Based Transform Coding Hyperspectral imaging acquires a large number of narrow spectral bands over a continuous spectral range, so that each pixel carries a full spectrum and the acquired data volume is huge. Data transmission and storage is a challenging task. Compression of hyperspectral images is inevitable. This work proposes a Hyperspectral Image (HSI) compression using Hybrid Transform. First, the HSI is decomposed into 1D and it is clustered and tiled. The Integer Karhunen-Loeve Transform (IKLT) is applied to each cluster, and likewise to the whole image, to obtain IKLT bands in the spectral dimension. The IKLT bands are then processed with an Integer Wavelet Transform (IDWT) to decorrelate the data in the spatial dimension. The combination of IKLT and IDWT is known as the Hybrid transform. Second, the decorrelated wavelet coefficients are coded with Spatial-orientation Tree Wavelet (STW), Wavelet Difference Reduction (WDR) and Adaptively Scanned Wavelet Difference Reduction (ASWDR). The experimental results show that the STW algorithm using the Hybrid Transform gives better PSNR (dB) and bits per pixel per band (bpppb) for hyperspectral images. STW, WDR and ASWDR with the Hybrid Transform are compared on the Indian Pines, Salinas, Botswana and KSC images. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment.
This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification. Examples of verification systems include Hoare logics and type systems.
Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema.In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation to the traditional refinement calculus of Action Systems and its direct extension, time wise refinement method. The adaptation provides well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into more concrete one a designer must that show conditions of both functional and temporal properties, and furthermore, power related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication. 
However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
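The hybrid-transform pipeline in this row decorrelates the cube spectrally with an integer KLT and spatially with an integer wavelet transform before STW/WDR/ASWDR coding. The sketch below shows the same spectral-then-spatial ordering with floating-point stand-ins (principal components plus a biorthogonal wavelet via PyWavelets); the cube size, wavelet choice, and decomposition level are assumptions, and the integer (reversible) transforms and the clustering and tiling step are not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 32))                   # rows x cols x spectral bands (made up)

# Spectral decorrelation: principal components (a floating-point KLT) of the
# band covariance, applied to every pixel spectrum.
pixels = cube.reshape(-1, cube.shape[2])
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = (pixels - mean) @ eigvecs[:, ::-1]          # bands reordered by decreasing variance
klt_bands = klt.reshape(cube.shape)

# Spatial decorrelation: a 2-D wavelet decomposition of every KLT band; the
# resulting subband coefficients are what a zerotree-style coder would consume.
coeffs = [pywt.wavedec2(klt_bands[:, :, b], "bior4.4", level=2)
          for b in range(klt_bands.shape[2])]
print(len(coeffs), coeffs[0][0].shape)            # 32 bands, coarse approximation subband
```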
Cooperating proofs for distributed programs with multiparty interactions The paper presents a proof system for partial-correctness assertions for a language for distributed programs based on multiparty interactions as its interprocess communication and synchronization primitive. The system is a natural generalization of the cooperating proofs introduced for partial-correctness proofs of CSP programs. | Some impossibility results in interprocess synchronization In this paper we construct a formal specification of the problem of synchronizing asynchronous processes under strong fairness. We prove that strong interaction fairness is impossible for binary (and hence for multiway) interactions and strong process fairness is impossible for multiway interactions. | A distributed algorithm to implement n-party rendezvous The concept of n-party rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The problem of implementing n-party rendezvous captures two central issues in the design of distributed systems: exclusion and synchronization. This paper describes a simple, distributed algorithm, referred to as the event manager algorithm, to implement n-party rendezvous. It also compares the performance of this algorithm with an existing algorithm for this problem. | A new and efficient implementation of multiprocess synchronization Without Abstract | On Fairness as an Abstraction for the Design of Distributed Systems | Action Systems with Synchronous Communication This paper shows that a simple extension of the action systems framework, adding procedure declarations to action systems, will give us a very general mechanism for synchronized communication between action systems. Both actions and procedure bodies are guarded commands. When an action in one action system calls a procedure in another action system, the effect is that of a remote procedure call. The calling action and the procedure body involved in the call are executed as a single atomic... | Stepwise refinement of parallel algorithms The refinement calculus and the action system formalism are combined to provide a uniform method for constructing parallel and distributed algorithms by stepwise refinement. It is shown that the sequential refinement calculus can be used as such for most of the derivation steps. Parallelism is introduced during the derivation by refinement of atomicity. The approach is applied to the derivation of a parallel version of the Gaussian elimination method for solving simultaneous linear equation systems. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | A lazy evaluator A different way to execute pure LISP programs is presented. It delays the evaluation of parameters and list structures without ever having to perform more evaluation steps than the usual method. Although the central idea can be found in earlier work this paper is of interest since it treats a rather well-known language and works out an algorithm which avoids full substitution. A partial correctness proof using Scott-Strachey semantics is sketched in a later section.
| Qualitative probabilistic modelling in event-B Event-B is a notation and method for discrete systems modelling by refinement. We introduce a small but very useful construction: qualitative probabilistic choice. It extends the expressiveness of Event-B allowing us to prove properties of systems that could not be formalised in Event-B before. We demonstrate this by means of a small example, part of a larger Event-B development that could not be fully proved before. An important feature of the introduced construction is that it does not complicate the existing Event-B notation or method, and can be explained without referring to the underlying more complicated probabilistic theory. The necessary theory [18] itself is briefly outlined in this article to justify the soundness of the proof obligations given. We also give a short account of alternative constructions that we explored, and rejected. | A framework for expressing the relationships between multiple views in requirements specification Composite systems are generally comprised of heterogeneous components whose specifications are developed by many development participants. The requirements of such systems are invariably elicited from multiple perspectives that overlap, complement, and contradict each other. Furthermore, these requirements are generally developed and specified using multiple methods and notations, respectively. It is therefore necessary to express and check the relationships between the resultant specification fragments. We deploy multiple ViewPoints that hold partial requirements specifications, described and developed using different representation schemes and development strategies. We discuss the notion of inter-ViewPoint communication in the context of this ViewPoints framework, and propose a general model for ViewPoint interaction and integration. We elaborate on some of the requirements for expressing and enacting inter-ViewPoint relationships-the vehicles for consistency checking and inconsistency management. Finally, though we use simple fragments of the requirements specification method CORE to illustrate various components of our work, we also outline a number of larger case studies that we have used to validate our framework. Our computer-based ViewPoints support environment, The Viewer, is also briefly described. | Web Services Based Architectures to Support Dynamic Inter-organizational Business Processes Dynamic inter-organizational business processes are necessary to enable the flexible creation of partnerships in areas such as e-commerce and supply-chain-management. Although many information system architectures for the support of static inter-organizational business processes exist, such architectures are still not available for supporting dynamic inter-organizational business processes. In this paper the special requirements created by dynamic interorganizational business processes will be analyzed and the contributions of existing approaches and web services evaluated. Based on the paradigm of the composite application, an architecture designed to support dynamic interorganizational business processes has been developed and will be introduced. | Automated derivation of time bounds in uniprocessor concurrent systems The successful development of complex real-time systems depends on analysis techniques that can accurately assess the timing properties of those systems. 
This paper describes a technique for deriving upper and lower bounds on the time that can elapse between two given events in an execution of a concurrent software system running on a single processor under arbitrary scheduling. The technique involves generating linear inequalities expressing conditions that must be satisfied by all executions of such a system and using integer programming methods to find appropriate solutions to the inequalities. The technique does not require construction of the state space of the system and its feasibility has been demonstrated by using an extended version of the constrained expression toolset to analyze the timing properties of some concurrent systems with very large state spaces. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.018728 | 0.015 | 0.013706 | 0.012352 | 0.0075 | 0.003875 | 0.001045 | 0.000403 | 0.000114 | 0.00001 | 0 | 0 | 0 | 0 |
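Several entries in this row treat the n-party rendezvous, in which a fixed group of processes synchronize and then perform a joint action. As a shared-memory illustration of the pattern only, the sketch below uses Python's threading.Barrier, whose action callback plays the role of the joint atomic action; the distributed event-manager algorithm and the fairness notions discussed above are not addressed.

```python
import threading

# Three "processes" repeatedly meet in a 3-party rendezvous; the barrier's action
# callback stands in for the joint atomic action executed once per interaction.
N_PARTIES, ROUNDS = 3, 2
log = []

def joint_action():
    log.append("all parties synchronized - joint action executed once")

barrier = threading.Barrier(N_PARTIES, action=joint_action)

def process():
    for _ in range(ROUNDS):
        # local computation of each participant would happen here
        barrier.wait()               # block until every participant is ready

threads = [threading.Thread(target=process) for _ in range(N_PARTIES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("\n".join(log))                # two rendezvous, one joint action each
```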
Grounded Conceptual Graph Models The ability to represent real-world objects is an important feature of a practical knowledge system. Most knowledge systems involve informal or ad-hoc mappings from their internal symbols to objects and concepts in their environment. This work introduces a framework for formally associating symbols to their meanings, a process we call grounding. Two kinds of grounding are discussed with respect to conceptual graphs --- active grounding, which involves actors to provide mappings to the environment, and terminological grounding, which involves actors that establish the basic elements of meaning with respect to a subject field's agreed-upon terminology. The work incorporates active knowledge systems and international terminological standards. | Improving medical protocols by formal methods. During the last decade, evidence-based medicine has given rise to an increasing number of medical practice guidelines and protocols. However, the work done on developing and distributing protocols outweighs the efforts on guaranteeing their quality. Indeed, anomalies like ambiguity and incompleteness are frequent in medical protocols. Recent efforts have tried to address the problem of protocol improvement, but they are not sufficient since they rely on informal processes and notations. Our objective is to improve the quality of medical protocols.The solution we suggest to the problem of quality improvement of protocols consists in the utilisation of formal methods. It requires the definition of an adequate protocol representation language, the development of techniques for the formal analysis of protocols described in that language and, more importantly, the evaluation of the feasibility of the approach based on the formalisation and verification of real-life medical protocols. For the first two aspects we rely on earlier work from the fields of knowledge representation and formal methods. The third aspect, i.e. the evaluation of the use of formal methods in the quality improvement of protocols, constitutes our main objective. The steps with which we have carried out this evaluation are the following: (1) take two real-life reference protocols which cover a wide variety of protocol characteristics; (2) formalise these reference protocols; (3) check the formalisation for the verification of interesting protocol properties; and (4) determine how many errors can be uncovered in this way.Our main results are: a consolidated formal language to model medical practice protocols; two protocols, each both modelled and formalised; a list of properties that medical protocols should satisfy; verification proofs for these protocols and properties; and perspectives of the potentials of this approach. Our results have been evaluated by a panel of medical experts, who judged that the problems we detected in the protocols with the help of formal methods were serious and should be avoided.We have succeeded in demonstrating the feasibility of formal methods for improving medical protocols. | On Overview of KRL, a Knowledge Representation Language | Formal Derivation of Strongly Correct Concurrent Programs. Summary A method is described for deriving concurrent programs which are consistent with the problem specifications and free from
deadlock and from starvation. The programs considered are expressed by nondeterministic repetitive selections of pairs of
synchronizing conditions and subsequent actions. An iterative, convergent calculus is developed for synthesizing the invariant
and synchronizing conditions which guarantee strong correctness. These conditions are constructed as limits of recurrences
associated with the specifications and the actions. An alternative method for deriving starvation-free programs by use of auxiliary
variables is also given. The applicability of the techniques presented is discussed through various examples; their use for
verification purposes is illustrated as well. | Simulation of hepatological models: a study in visual interactive exploration of scientific problems In many different fields of science and technology, visual expressions formed by diagrams, sketches, plots and even images are traditionally used to communicate not only data but also procedures. When these visual expressions are systematically used within a scientific community, bi-dimensional notations often develop which allow the construction of complex messages from sets of primitive icons. This paper discusses how these notations can be translated into visual languages and organized into an interactive environment designed to improve the user's ability to explore scientific problems. To facilitate this translation, the use of Conditional Attributed Rewriting Systems has been extended to visual language definition. The case of a visual language in the programming of a simulation of populations of hepatic cells is studied. A discussion is given of how such a visual language allows the construction of programs through the combination of graphical symbols which are familiar to the physician or which schematize shapes familiar to him in that they resemble structures the observes in real experiments. It is also shown how such a visual approach allows the user to focus on the solution of his problems, avoiding any request for unnecessary precision and most requests for house-keeping data during the interaction. | Object-oriented modeling and design | Reasoning Algebraically about Loops We show here how to formalize different kinds of loop constructs within the refinement calculus, and how to use this formalization to derive general loop transformation rules. The emphasis is on using algebraic methods for reasoning about equivalence and refinement of loops, rather than looking at operational ways of reasoning about loops in terms of their execution sequences. We apply the algebraic reasoning techniques to derive a collection of different loop transformation rules that have been found important in practical program derivations: merging and reordering of loops, data refinement of loops with stuttering transitions and atomicity refinement of loops. | Separation and information hiding We investigate proof rules for information hiding, using the recent formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased. | Joining specification statements The specification statement allows us to easily express what a program statement does. This paper shows how refinement of specification statements can be directly expressed using the predicate calculus. It also shows that the specification statements interpreted as predicate transformers form a complete lattice, and that this lattice is the lattice of conjunctive predicate transformers. The join operator of this lattice is constructed as a specification statement. The join operators of two interesting sublattices of the set of specification statements are also investigated. 
| Hex-splines: a novel spline family for hexagonal lattices This paper proposes a new family of bivariate, nonseparable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact they form a Riesz basis. We also highlight the approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data. | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Procedures and atomicity refinement The introduction of an early return from a (remote) procedure call can increase the degree of parallelism in a parallel or distributed algorithm modeled by an action system. We define a return statement for procedures in an action systems framework and show that it corresponds to carrying out an atomicity refinement. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie...
| 1.2 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
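The tabu search abstract in the row above describes choice rules driven by integer-infeasibility measures and an aspiration criterion, evaluated on multiconstraint knapsack problems. The sketch below is a generic single-flip tabu search for that 0/1 multiconstraint knapsack setting, not the specialized rules of the cited paper; the penalty weight, tabu tenure, and random instance are illustrative assumptions.

```python
import numpy as np

def tabu_knapsack(c, A, b, iters=500, tenure=7, penalty=10.0, seed=0):
    """Minimal single-flip tabu search for a 0/1 multiconstraint knapsack:
    maximize c.x subject to A.x <= b with x binary. Infeasible solutions are
    allowed during the search but penalised by the total constraint violation
    (a simple integer-infeasibility measure)."""
    rng = np.random.default_rng(seed)
    n = len(c)
    x = np.zeros(n, dtype=int)              # start from the empty knapsack
    tabu_until = np.zeros(n, dtype=int)     # iteration until which flipping j is tabu
    best_x, best_val = x.copy(), 0.0

    def score(sol):
        violation = np.maximum(A @ sol - b, 0).sum()
        return c @ sol - penalty * violation

    for it in range(1, iters + 1):
        best_move, best_move_score = None, -np.inf
        for j in rng.permutation(n):        # evaluate every single-bit flip
            cand = x.copy()
            cand[j] ^= 1
            s = score(cand)
            feasible = np.all(A @ cand <= b)
            aspiration = feasible and (c @ cand > best_val)   # override tabu status
            if (it >= tabu_until[j] or aspiration) and s > best_move_score:
                best_move, best_move_score = j, s
        if best_move is None:
            break
        x[best_move] ^= 1
        tabu_until[best_move] = it + tenure  # forbid re-flipping for `tenure` iterations
        if np.all(A @ x <= b) and c @ x > best_val:
            best_x, best_val = x.copy(), float(c @ x)
    return best_x, best_val

# Tiny random instance: 5 items, 2 knapsack constraints.
rng = np.random.default_rng(1)
c = rng.integers(5, 20, size=5)
A = rng.integers(1, 10, size=(2, 5))
b = A.sum(axis=1) * 0.6
print(tabu_knapsack(c, A, b))
```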
New Advancements in Zoning-Based Recognition of Handwritten Characters In handwritten character recognition, zoning is one of the most effective approaches for feature extraction. When a zoning method is considered, the pattern image is subdivided into zones each one providing regional information related to a specific part of the pattern. The design of a zoning method concerns the definition of zoning topology and membership function. Both aspects have been recently investigated and new solutions have been proposed, able to increase adaptability of the zoning method to different application requirements. In this paper some of the most recent results in the field of zoning method design are presented and some valuable directions of research are highlighted. | Handwritten Digit Recognition by Multi-objective Optimization of Zoning Methods This paper addresses the use of multi-objective optimization techniques for optimal zoning design in the context of handwritten digit recognition. More precisely, the Non-dominated Sorting Genetic Algorithm II (NSGA II) has been considered for the optimization of Voronoi-based zoning methods. In this case both the number of zones and the zone position and shape are optimized in a unique genetic procedure. The experimental results point out the usefulness of multi-objective genetic algorithms for achieving effective zoning topologies for handwritten digit recognition. | Voronoi-Based Zoning Design by Multi-objective Genetic Optimization This paper presents a new approach to optimal zoning design. The approach uses a multi-objective genetic algorithm to define, in a unique process, the optimal number of zones of the zoning method along with the optimal zones, defined through Voronoi diagrams. The experimental tests, carried out in the field of handwritten digit recognition, show the superiority of the new approach with respect to traditional dynamic approaches for zoning design, based on single-objective optimization techniques. | Fuzzy-Zoning-Based Classification for Handwritten Characters In zoning-based classification, a membership function defines the way a feature influences the different zones of the zoning method. This paper presents a new class of membership functions, which are called fuzzy-membership functions (FMFs), for zoning-based classification. These FMFs can be easily adapted to the specific characteristics of a classification problem in order to maximize classification performance. In this research, a real-coded genetic algorithm is presented to find, in a single optimization procedure, the optimal FMF, together with the optimal zoning described by Voronoi tessellation. The experimental results, which are carried out in the field of handwritten digit and character recognition, indicate that the optimal FMF performs better than other membership functions based on abstract-level, ranked-level, and measurement-level weighting models, which can be found in the literature. | Numeral Recognition by Weighting Local Decisions This paper presents a new technique to improve the combination of classification decisions obtained from local analysis of patterns. Specifically, a genetic algorithm is used to determine the optimal weight vector to balance the local decisions in the combination process. The experimental results, carried out in the field of hand-written numeral recognition, demonstrate the effectiveness of the new technique.
| Handwritten alphanumeric character recognition by the neocognitron A pattern recognition system which works with the mechanism of the neocognitron, a neural network model for deformation-invariant visual pattern recognition, is discussed. The neocognitron was developed by Fukushima (1980). The system has been trained to recognize 35 handwritten alphanumeric characters. The ability to recognize deformed characters correctly depends strongly on the choice of the training pattern set. Some techniques for selecting training patterns useful for deformation-invariant recognition of a large number of characters are suggested. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | State-Based Model Checking of Event-Driven System Requirements It is demonstrated how model checking can be used to verify safety properties for event-driven systems. SCR tabular requirements describe required system behavior in a format that is intuitive, easy to read, and scalable to large systems (e.g. the software requirements for the A-7 military aircraft). Model checking of temporal logics has been established as a sound technique for verifying properties of hardware systems. An automated technique for formalizing the semiformal SCR requirements and for transforming the resultant formal specification onto a finite structure that a model checker can analyze has been developed. This technique was effective in uncovering violations of system invariants in both an automobile cruise control system and a water-level monitoring system. | The WEKA data mining software: an update More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35].
These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003. | Resolving Goal Conflicts via Negotiation In non-cooperative multi-agent planning, resolution of multiple conflicting goals is the result of finding compromise solutions. Previous research has dealt with such multi-agent problems where planning goals are well-specified, subgoals can be enumerated, and the utilities associated with subgoals known. Our research extends the domain of problems to include non-cooperative multi-agent interactions where planning goals are ill-specified, subgoals cannot be enumerated, and the associated utilities are not precisely known. We provide a model of goal conflict resolution through negotiation implemented in the PERSUADER, a program that resolves labor disputes. Negotiation is performed through proposal and modification of goal relaxations. Case-Based Reasoning is integrated with the use of multi-attribute utilities to portray tradeoffs and propose novel goal relaxations and compromises. Persuasive arguments are generated and used as a mechanism to dynamically change the agents' utilities so that convergence to an acceptable compromise can be achieved. | Design and analysis of high-throughput lossless image compression engine using VLSI-oriented FELICS algorithm In this paper, the VLSI-oriented fast, efficient, lossless image compression system (FELICS) algorithm, which consists of simplified adjusted binary code and Golomb-Rice code with storage-less k parameter selection, is proposed to provide the lossless compression method for high-throughput applications. The simplified adjusted binary code reduces the number of arithmetic operation and improves processing speed. According to theoretical analysis, the storage-less k parameter selection applies a fixed k value in Golomb-Rice code to remove data dependency and extra storage for cumulation table. Besides, the color difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operation. Based on VLSI-oriented FELICS algorithm, the proposed hardware architecture features compactly regular data flow, and two-level parallelism with four-stage pipelining is adopted as the framework of the proposed architecture. The chip is fabricated in TSMC 0.13-µm 1P8M CMOS technology with Artisan cell library. Experiment results reveal that the proposed architecture presents superior performance in parallelism-efficiency and power-efficiency compared with other existing works, which characterize high-speed lossless compression. The maximum throughput can achieve 4.36 Gb/s. Regarding high definition (HD) display applications, our encoding capability can achieve a high-quality specification of full-HD 1080p at 60 Hz with complete red, green, blue color components. Furthermore, with the configuration as the multilevel parallelism, the proposed architecture can be applied to the advanced HD display specifications, which demand huge requirement of throughput. 
| Manipulating and documenting software structures using SHriMP views An effective approach to program understanding involves browsing, exploring, and creating views that document software structures at different levels of abstraction. While exploring the myriad of relationships in a multi-million line legacy system, one can easily lose context. One approach to alleviate this problem is to visualize these structures using fisheye techniques. This paper introduces Simple Hierarchical Multi-Perspective views (SHriMPs). The SHriMP visualization technique has been incorporated into the Rigi reverse engineering system. This greatly enhances Rigi's capabilities for documenting design patterns and architectural diagrams that span multiple levels of abstraction. The applicability and usefulness of SHriMPs are illustrated with selected program understanding tasks. | On Teaching Visual Formalisms A graduate course on visual formalisms for reactive systems emphasized using such languages for not only specification and requirements but also (and predominantly) actual execution. The course presented two programming approaches: an intra-object approach using statecharts and an interobject approach using live sequence charts. Using each approach, students built a small system of their choice and then combined the two systems. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.24 | 0.24 | 0.12 | 0.08 | 0.034286 | 0.000571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
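The zoning papers in the row above subdivide a character image into zones and collect regional information from each zone. As a minimal, hedged illustration of that idea, the sketch below uses the simplest possible design, a uniform grid with crisp membership, and returns per-zone foreground-pixel densities; the grid size and toy image are assumptions, and the cited papers replace the fixed grid with optimised Voronoi zones and fuzzy membership functions.

```python
import numpy as np

def zoning_features(img, rows=4, cols=4):
    """Crisp uniform-grid zoning: split a binary character image into
    rows x cols zones and return the foreground-pixel density of each zone.
    The papers above replace this fixed grid with optimised Voronoi zones
    and fuzzy membership functions, but the regional-feature idea is the same."""
    h, w = img.shape
    feats = np.empty(rows * cols)
    for i in range(rows):
        for j in range(cols):
            zone = img[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            feats[i * cols + j] = zone.mean()   # fraction of "on" pixels in the zone
    return feats

# Toy 8x8 "character": a diagonal stroke.
img = np.eye(8, dtype=float)
print(zoning_features(img, rows=2, cols=2))     # -> [0.25, 0.0, 0.0, 0.25]
```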
Goal-Oriented Requirements Engineering: A Roundtrip from Research to Practice The software industry is more than ever facing the challenge of delivering WYGIWYW software (What You Get Is What You Want). A well-structured document specifying adequate, complete, consistent, precise, and measurable requirements is a critical prerequisite for such software. Goals have been recognized to be among the driving forces for requirements elicitation, elaboration, organization, analysis, negotiation, documentation, and evolution. Growing experience with goal-oriented requirements engineering suggests synergistic links between research in this area and good practice. We discuss one journey along this road from influencing ideas and research results to tool developments to good practice in industrial projects. On the way, we discuss some lessons learnt, obstacles to technology transfer, and challenges for better requirements engineering research and practice. | Personal and Contextual Requirements Engineering A framework for requirements analysis is proposed that accounts for individual and personal goals and the effect of time and context on personal requirements. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. Different implementation pathways have cost-benefit implications for stakeholders, so cost-benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system. | Reasoning About Alternative Requirements Options This paper elaborates on some of the fundamental contributions made by John Mylopoulos in the area of Requirements Engineering. We specifically focus on the use of goal models and their soft goals for reasoning about alternative options arising in the requirements engineering process. A personal account of John's qualitative reasoning technique for comparing alternatives is provided first. A quantitative but lightweight technique for evaluating alternative options is then presented. This technique builds on mechanisms introduced by the qualitative scheme while overcoming some problems raised by it. A meeting scheduling system is used as a running example to illustrate the main ideas. | The brave new world of design requirements: four key principles Despite its undoubted success, Requirements Engineering (RE) needs a better alignment between its research focus and its grounding in practical needs as these needs have changed significantly recently. We explore changes in the environment, targets, and the process of requirements engineering (RE) that influence the nature of fundamental RE questions. Based on these explorations we propose four key principles that underlie current requirements processes: (1) intertwining of requirements with implementation and organizational contexts, (2) dynamic evolution of requirements, (3) architectures as a critical stabilizing force, and (4) high levels of design complexity. We make recommendations to refocus the RE research agenda so as to meet new challenges based on the review and analysis of these four key themes. We note several managerial and practical implications.
| Handling Obstacles in Goal-Oriented Requirements Engineering Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system. | Four dark corners of requirements engineering Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the "four dark corners," exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. They also make it possible to determine exactly what it means for requirements engineering to be successfully completed. Categories and Subject Descriptors: D.2.1 (Software Engineering): Requirements/Specifications—methodologies | Negotiation behavior during requirement specification Negotiation is part of specification; during specification acquisition, users negotiate among themselves and with analysts. During specification design, designers negotiate among themselves and with a project leader. The author reports on work concerned with multiagent specification design.
He describes how various agents, often with conflicting goals, can resolve their differences, integrate their results, and produce a unified specification. Such bargaining behavior is both ubiquitous in complex specification and unrepresented by current methods. Automated means to promote integrative behavior during specification are presented. Formal models of users' desires and resolution methods are necessary for integrative reasoning | Inferring Declarative Requirements Specifications from Operational Scenarios Scenarios are increasingly recognized as an effective means for eliciting, validating, and documenting software requirements. This paper concentrates on the use of scenarios for requirements elicitation and explores the process of inferring formal specifications of goals and requirements from scenario descriptions. Scenarios are considered here as typical examples of system usage; they are provided in terms of sequences of interaction steps between the intended software and its environment. Such scenarios are in general partial, procedural, and leave required properties about the intended system implicit. In the end such properties need to be stated in explicit, declarative terms for consistency/completeness analysis to be carried out. A formal method is proposed for supporting the process of inferring specifications of system goals and requirements inductively from interaction scenarios provided by stakeholders. The method is based on a learning algorithm that takes scenarios as examples/counterexamples and generates a set of goal specifications in temporal logic that covers all positive scenarios while excluding all negative ones. The output language in which goals and requirements are specified is the KAOS goal-based specification language. The paper also discusses how the scenario-based inference of goal specifications is integrated in the KAOS methodology for goal-based requirements engineering. In particular, the benefits of inferring declarative specifications of goals from operational scenarios are demonstrated by examples of formal analysis at the goal level, including conflict analysis, obstacle analysis, the inference of higher-level goals, and the derivation of alternative scenarios that better achieve the underlying goals. | From E-R to "A-R" - Modelling Strategic Actor Relationships for Business Process Reengineering | Programmers use slices when debugging Computer programmers break apart large programs into smaller coherent pieces. Each of these pieces: functions, subroutines, modules, or abstract datatypes, is usually a contiguous piece of program text. The experiment reported here shows that programmers also routinely break programs into one kind of coherent piece which is not contiguous. When debugging unfamiliar programs programmers use program pieces called slices which are sets of statements related by their flow of data. The statements in a slice are not necessarily textually contiguous, but may be scattered through a program. | Image morphing: a survey. Image morphing has received much attention in recent years. It has proven to be a powerful tool for visual effects in film and television, enabling the fluid transformation of one digital image into another. This paper surveys the growth of this field and describes recent advances in image morphing in terms of feature specification, warp generation methods, and transition control. These areas relate to the ease of use and quality of results.
We describe the role of radial basis functions, thin plate splines, energy minimization, and multilevel free-form deformations in advancing the state-of-the-art in image morphing. Recent work on a generalized framework for morphing among multiple images is described. | Analyzing User Requirements by Use Cases: A Goal-Driven Approach The purpose of requirements engineering is to elicit and evaluate necessary and valuable user needs. Current use-case approaches to requirements acquisition inadequately support use-case formalization and nonfunctional requirements. Based on industry trends and research, the authors have developed a method to structure use-case models with goals. They use a simple meeting planner system to illustrate the benefits of this new approach | Analogical retrieval in reuse-oriented requirements engineering Computational mechanisms are presented for analogical retrieval of domain knowledge as a basis for intelligent tool-based assistance for requirements engineers. A first mechanism, called the domain matcher, retrieves object system models which describe key features for new problems. A second mechanism, called the problem classifier, reasons with analogical mappings inferred by the domain matcher to detect potential incompleteness, overspecification and inconsistencies in entered facts and requirements. Both mechanisms are embedded in AIR, a toolkit that provides co-operative reuse-oriented assistance for requirements engineers. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.084444 | 0.093333 | 0.066667 | 0.046667 | 0.006927 | 0.002905 | 0.000585 | 0.000211 | 0.000091 | 0 | 0 | 0 | 0 | 0
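The image morphing survey in the row above credits radial basis functions and thin plate splines for warp generation from matched feature points. The sketch below is a much-simplified illustration of that idea: it interpolates the displacements observed at a few control points with a thin-plate-spline-like kernel (omitting the affine term and side conditions of a full TPS) and applies the warp to a query point. The control points and the small ridge term are assumptions.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, query):
    """Interpolate the displacement observed at matched feature points
    (src -> dst) with a thin-plate-spline-like radial kernel U(r) = r^2 log r
    and apply the resulting warp to arbitrary query points. A full TPS would
    add an affine term and side conditions; this sketch omits them."""
    def kernel(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r * r * np.log(r), 0.0)

    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    K = kernel(d) + 1e-9 * np.eye(len(src_pts))      # tiny ridge for numerical stability
    weights = np.linalg.solve(K, dst_pts - src_pts)  # one displacement weight per control point

    dq = np.linalg.norm(query[:, None, :] - src_pts[None, :, :], axis=-1)
    return query + kernel(dq) @ weights

# Four corner control points, each nudged slightly; warp the centre of the square.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
print(rbf_warp(src, dst, np.array([[0.5, 0.5]])))
```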
Augmenting SADT to develop computer support for cooperative work Using the language-action perspective proposed by T. Winograd and F. Flores (1986), the author creates a general framework for both systems analysis and its practice. Structured analysis and design technique (SADT), a systems analysis methodology, is augmented using this framework. This work took place on the CONTRACT (commitment negotiation and tracking tool) project. The author includes the experiences of both users and systems analysts during the project, and emphasizes how to develop SADT descriptions with users to represent the richness and complexity of social interactions at work. The resulting software specification is also presented, including how it aided the work of the people who actually helped develop it | The domain theory for requirements engineering Retrieval, validation, and explanation tools are described for cooperative assistance during requirements engineering and are illustrated by a library system case study. Generic models of applications are reused as templates for modeling and critiquing requirements for new applications. The validation tools depend on a matching process which takes facts describing a new application and retrieves the appropriate generic model from the system library. The algorithms of the matcher, which implement a computational theory of analogical structure matching, are described. A theory of domain knowledge is proposed to define the semantics and composition of generic domain models in the context of requirements engineering. A modeling language and a library of models arranged in families of classes are described. The models represent the basic transaction processing or 'use case' for a class of applications. Critical difference rules are given to distinguish between families and hierarchical levels. Related work and future directions of the domain theory are discussed. | Structured Analysis (SA): A Language for Communicating Ideas Structured analysis (SA) combines blueprint-like graphic language with the nouns and verbs of any other language to provide a hierarchic, top-down, gradual exposition of detail in the form of an SA model. The things and happenings of a subject are expressed in a data decomposition and an activity decomposition, both of which employ the same graphic building block, the SA box, to represent a part of a whole. SA arrows, representing input, output, control, and mechanism, express the relation of each part to the whole. The paper describes the rationalization behind some 40 features of the SA language, and shows how they enable rigorous communication which results from disciplined, recursive application of the SA maxim: "Everything worth saying about anything worth saying something about must be expressed in six or fewer pieces." | On Overview of KRL, a Knowledge Representation Language | Implementing Remote procedure calls Remote procedure calls (RPC) are a useful paradigm for providing communication across a network between programs written in a high level language. This paper describes a package, written as part of the Cedar project, providing a remote procedure call facility. The paper describes the options that face a designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements.
We include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients.
Our primary aim in building an RPC package was to make the building of distributed systems easier. Previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. We hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. To achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls. | Alloy: a lightweight object modelling notation Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies. | Semantic grammar: an engineering technique for constructing natural language understanding systems One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine. | Recursive functions of symbolic expressions and their computation by machine, Part I This paper in LaTeX, partly supported by ARPA (ONR) grant N00014-94-1-0775 to Stanford University where John McCarthy has been since 1962. Copied with minor notational changes from CACM, April 1960. If you want the exact typography, look there. Current address: John McCarthy, Computer Science Department, Stanford, CA 94305 (email: [email protected]), (URL: http://www-formal.stanford.edu/jmc/) by starting with the class of expressions called S-expressions and the functions called... | A study of cross-validation and bootstrap for accuracy estimation and model selection We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation.
We report on a large-scale experiment--over half a million runs of C4.5 and a Naive-Bayes algorithm--to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds. | A Theory of Prioritizing Composition An operator for the composition of two processes, where one process has priority over the other process, is studied. Processes are described by action systems, and data refinement is used for transforming processes. The operator is shown to be compositional, i.e. monotonic with respect to refinement. It is argued that this operator is adequate for modelling priorities as found in programming languages and operating systems. Rules for introducing priorities and for raising and lowering priorities of processes are given. Dynamic priorities are modelled with special priority variables which can be freely mixed with other variables and the prioritising operator in program development. A number of applications show the use of prioritising composition for modelling and specification in general. | Repository support for multi-perspective requirements engineering Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase meta database management system (ConceptBase is available through the web site http://www-i5.Informatik.RWTH-Aachen.de/Cbdor/index.html) and has been applied in a number of real-world requirements engineering projects. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions.
Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties and, furthermore, power-related issues are satisfied. | Analysis-Driven Lossy Compression of DNA Microarray Images. DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need be addressed, with compression playing a significant role. However, existing lossless coding algorithms yie... | 1.2 | 0.023529 | 0.002353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
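The cross-validation study cited in the row above recommends ten-fold stratified cross-validation for model selection. The sketch below estimates accuracy that way with scikit-learn's StratifiedKFold; a DecisionTreeClassifier stands in for C4.5 (which scikit-learn does not provide), and the Iris data is only a convenient stand-in for the study's real-world datasets.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

def cv_accuracy(clf, X, y, folds=10, seed=0):
    """Estimate accuracy by ten-fold stratified cross-validation:
    each fold preserves the class proportions of the full dataset."""
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    accs = []
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        accs.append(np.mean(clf.predict(X[test_idx]) == y[test_idx]))
    return float(np.mean(accs)), float(np.std(accs))

X, y = load_iris(return_X_y=True)
mean_acc, std_acc = cv_accuracy(DecisionTreeClassifier(random_state=0), X, y)
print(f"10-fold stratified CV accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")
```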
Agricultural leaf blight disease segmentation using indices based histogram intensity segmentation approach Segmentation is the grouping of pixels based on some kind of similarity or discontinuity among the pixels. Segmentation of the ROI from the given input image determines the success of the analysis, and validity metrics help to measure the similarity of the segmented image result. Food is essential for human survival, and the agriculture industry plays a vital role in providing it, yet the industry suffers crop losses for several reasons. One reason for yield loss is undiagnosed disease; most of the time the farmer can detect the disease only at the last moment. Implementing technological improvements in the agriculture industry can reduce crop losses and thereby increase farmer income. An indices-based intensity histogram segmentation technique is used to segment the disease-affected part from an unhealthy leaf with a better accuracy rate. Segmentation is an important stage in image processing and helps to diagnose the diseased region. After the disease-affected area is categorized, it is most important to validate the segmented image. Validation algorithms are used to validate the segmented part; well-known similarity measures include the Dice index, overlap coefficient, Jaccard coefficient, Cosine measure, Asymmetric measure, and Dissimilarity measures. The introduced method successfully segments the affected region with 98.025% accuracy, and the segmented region has 0.964% of mutual information. | On Overview of KRL, a Knowledge Representation Language | Nausicaä and the Sirens: A Tale of Two Intelligent Autonomous Agents Nausicaä and the sirens, mythological characters from Homer's Odyssey, have totally different characters. Nausicaä, an intelligent and modest princess, helps Odysseus on his journey to Alcinoüs's city. The Sirens, however, are sea creatures who use their beautiful voices to lure mariners onto the rocks surrounding their island. These characters gave me inspiration on how to design and deploy agents for real-world tasks. | The NYU Ultracomputer Designing an MIMD Shared Memory Parallel Computer We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network with the geometry of an Omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to implement efficiently the important fetch-and-add synchronization primitive. We outline the hardware that would be required to build a 4096 processor system using 1990's technology. We also discuss system software issues, and present analytic studies of the network performance. Finally, we include a sample of our effort to implement and simulate parallel variants of important scientific programs. | Model checking In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems.
Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and is 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems. | Inheritance and synchronization with enabled-sets We discuss several issues related to the integration of inheritance and concurrency in an object-oriented language to support fine-grain parallel algorithms. We present a reflective extension of the actor model to implement inheritance mechanisms within the actor model. We demonstrate that a particularly expressive and inheritable synchronization mechanism must support local reasoning, be composable, be first-class, and allow parameterization based on message content. We present such a mechanism based on the concept of enabled-sets, and illustrate each property. We have implemented enabled-sets in the Rosette prototyping testbed. | Data refinement of predicate transformers Data refinement is the systematic substitution of one data type for another in a program. Usually, the new data type is more efficient than the old, but also more complex; the purpose of data refinement in that case is to make progress in a program design from more abstract to more concrete formulations. A particularly simple definition of data refinement is possible when programs are taken to be predicate transformers in the sense of Dijkstra. Central to the definition is a function taking abstract predicates to concrete ones, and that function, a generalisation of the abstraction function, therefore is a predicate transformer as well. Advantages of the approach are: proofs about data refinement are simplified; more general techniques of data refinement are suggested; and a style of program development is encouraged in which data refinements are calculated directly without proof obligation. | Using a Process Algebra to Control B Operations The B-Method is a state-based formal method that describes system behaviour in terms of MACHINES whose state changes under OPERATIONS. The process algebra CSP is an event-based formalism that enables descriptions of patterns of system behaviour. This paper is concerned with the combination of these complementary views, in which CSP is used to describe the control executive for a B Abstract System. We discuss consistency between the two views and how it can be formally established. A typical... | Freefinement Freefinement is an algorithm that constructs a sound refinement calculus from a verification system under certain conditions. In this paper, a verification system is any formal system for establishing whether an inductively defined term, typically a program, satisfies a specification.
Examples of verification systems include Hoare logics and type systems. Freefinement first extends the term language to include specification terms, and builds a verification system for the extended language that is a sound and conservative extension of the original system. The extended system is then transformed into a sound refinement calculus. The resulting refinement calculus can interoperate closely with the verification system - it is even possible to reuse and translate proofs between them. Freefinement gives a semantics to refinement at an abstract level: it associates each term of the extended language with a set of terms from the original language, and refinement simply reduces this set. The paper applies freefinement to a simple type system for the lambda calculus and also to a Hoare logic. | 3-D transformations of images in scanline order | Integration of Statecharts View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods. | Solving zero-one mixed integer programming problems using tabu search We describe a tabu search (TS) approach for solving general zero-one mixed integer programming (MIP) problems that exploits the extreme point property of zero-one solutions. Specialized choice rules and aspiration criteria are identified for the problems, expressed as functions of integer infeasibility measures and objective function values. The first-level TS mechanisms are then extended with advanced level strategies and learning. We also look at probabilistic measures in this framework, and examine how the learning tool Target Analysis (TA) can be applied to identify better control structures and decision rules. Computational results are reported on a portfolio of multiconstraint knapsack problems. Our approach is designed to solve thoroughly general 0/1 MIP problems and thus contains no problem domain specific knowledge, yet it obtains solutions for the multiconstraint knapsack problem whose quality rivals, and in some cases surpasses, the best solutions obtained by special purpose methods that have been created to exploit the special structure of these problems. | Power Aware System Refinement We propose a formal, power aware refinement of systems. The proposed approach lays its foundation on the traditional refinement calculus of Action Systems and its direct extension, the time-wise refinement method. The adaptation provides a well-founded mathematical basis for the systems modeled with the Timed Action Systems formalism. In the refinement of an abstract system into a more concrete one, a designer must show that conditions of both functional and temporal properties and, furthermore, power-related issues are satisfied. | Extending statecharts to model system interactions Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communication.
However, when statecharts are considered to support the modeling of system interactions, e.g., in Systems of Systems (SoS), they lack the notions of multiplicity (of systems), and interactions and parallelism (among systems). | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
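The leaf blight segmentation abstract that opens the row above validates segmented regions with similarity measures such as the Dice index and Jaccard coefficient. The sketch below computes those two measures for binary masks; the toy 4x4 masks are assumptions used only to show the arithmetic.

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def jaccard(seg, gt):
    """Jaccard (intersection-over-union) coefficient between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union

# Toy 4x4 masks: the segmentation overlaps the ground truth in 3 of its 4 pixels.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True                      # 4 ground-truth pixels
seg = gt.copy()
seg[2, 2] = False                        # segmentation misses one pixel
print(dice(seg, gt), jaccard(seg, gt))   # -> 0.857..., 0.75
```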
Secure Medical Data Transmission Model for IoT-Based Healthcare Systems. Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security and integrity of medical data have become big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed through integrating either the 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption scheme is built using a combination of the Advanced Encryption Standard and the Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters: the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values varied from 50.59 to 57.44 in the case of color images and from 50.52 to 56.09 with the gray scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were equal to one for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the patient's confidential data in a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image. | A secure fragile watermarking scheme based on chaos-and-hamming code In this work, a secure fragile watermarking scheme is proposed. Images are protected and any modification to an image is detected using a novel hybrid scheme combining a two-pass logistic map with Hamming code. For security purposes, the two-pass logistic map scheme contains a private key to resist the vector quantization (VQ) attacks even though the embedding scheme is block independent. To ensure image integrity, watermarks are embedded into the to-be-protected images which are generated using the Hamming code technique. Experimental results show that the proposed scheme has satisfactory protection ability and can detect and locate various malicious tampering via image insertion, erasing, blurring, sharpening, contrast modification, and even burst bits. Additionally, experiments prove that the proposed scheme successfully resists VQ attacks. | Image encryption using the two-dimensional logistic chaotic map Chaos maps and chaotic systems have been proved to be useful and effective for cryptography. In our study, the two-dimensional logistic map with complicated basin structures and attractors is first used for image encryption. The proposed method adopts the classic framework of the permutation-substitution network in cryptography and thus ensures both confusion and diffusion properties for a secure cipher. The proposed method is able to encrypt an intelligible image into a random-like one from the statistical point of view and the human visual system point of view. Extensive simulation results using test images from the USC-SIPI image database demonstrate the effectiveness and robustness of the proposed method.
Security analysis results of using both the conventional and the most recent tests show that the encryption quality of the proposed method reaches or excels the current state-of-the-art methods. Similar encryption ideas can be applied to digital data in other formats (e.g., digital audio and video). We also publish the cipher MATLAB open-source-code under the web page https://sites.google.com/site/tuftsyuewu/source-code. (c) 2012 SPIE and IS&T. [DOI: 10.1117/1.JEI.21.1.013014] | CRT-based fragile self-recovery watermarking scheme for image authentication and recovery Fragile watermarking is one of the effective techniques for authentication of digital documents and images. However, recovering the content of the tampered region in a watermarked image is a challenging task while considering conflicting criteria of imperceptibility and watermark embedding capacity. In this paper we propose a Chinese remainder theorem (CRT)-based watermarking scheme which can recover the original contents in the tampered region of the digital content while maintaining imperceptibility criterion. High peak signal to noise ratio (PSNR) and large watermark capacity can be achieved by using the CRT-based embedding scheme. Since only modular operations are involved in computation of the CRT-based technique, it provides computational advantage as it involves only modular arithmetic. Besides, CRT-based technique introduces additional security to the watermarking scheme. By taking several digital images, we have shown that the proposed technique can recover the tampered contents effectively. We have also considered forgery detection on a digital cheque, eCheque, and shown that the proposed technique can detect and recover the original content from the forged cheque. | An adjustable-purpose image watermarking technique by particle swarm optimization. Imperceptibility, security, capacity, and robustness are among many aspects of image watermarking design. An ideal watermarking system should embed a large amount of information perfectly securely, but with no visible degradation to the host image. Many researchers have geared efforts towards developing specific techniques for variant applications. In this paper, we propose an adjustable-purpose, reversible and fragile watermarking scheme for image watermarking by particle swarm optimization (PSO). In general, given any host image and watermark, our scheme can provide an optimal watermarking solution. First, the content of a host image is analyzed to extract significant regions of interest (ROIs) automatically. The remaining regions of non-interest (RONIs) are collated for embedding watermarks by different amounts of bits determined by PSO to achieve optimal watermarking. The parameters can be adjusted relying upon user’s watermarking purposes. Experimental results show that the proposed technique has accomplished higher capacity and higher PSNR (peak signal-to-noise ratio) watermarking. | Digital Watermarking in Telemedicine Applications - Towards Enhanced Data Security and Accessibility Implementing telemedical solutions has become a trend amongst the various research teams at an international level. Yet, contemporary information access and distribution technologies raise critical issues that urgently need to be addressed, especially those related to security. 
The paper suggests the use of watermarking in telemedical applications in order to enhance the security of transmitted sensitive medical data, familiarizes the reader with a telemedical system and a watermarking module that have already been developed, and proposes an architecture that will enable the integration of the two systems, taking into account a variety of use cases and application scenarios. | Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: A meta-analysis The purpose of this meta-analysis is to examine the overall effect, as well as the impact of selected instructional design principles, of virtual reality technology-based instruction (i.e., games, simulations, virtual worlds) in K-12 and higher education settings. A total of 13 studies (N = 3081) in the category of games, 29 studies (N = 2553) in the category of simulations, and 27 studies (N = 2798) in the category of virtual worlds were meta-analyzed. The key inclusion criteria were that the study came from K-12 or higher education settings, used experimental or quasi-experimental research designs, and used a learning outcome measure to evaluate the effects of the virtual reality-based instruction. Results suggest that games (FEM = 0.77; REM = 0.51), simulations (FEM = 0.38; REM = 0.41), and virtual worlds (FEM = 0.36; REM = 0.41) were effective in improving learning outcome gains. The homogeneity analysis of the effect sizes was statistically significant, indicating that the studies differed from each other. Therefore, we conducted a moderator analysis using the 13 variables used to code the studies. Key findings include the following: games show higher learning gains than simulations and virtual worlds; for simulation studies, elaborate-explanation feedback is more suitable for declarative tasks, whereas knowledge of the correct response is more appropriate for procedural tasks; and students' performance is enhanced when they play the game individually rather than in a group. In addition, we found an inverse relationship between the number of treatment sessions and learning gains for games. With regard to virtual worlds, we found that repeatedly measuring students deteriorates their learning outcome gains. We discuss the results to highlight the importance of considering instructional design principles when designing virtual reality-based instruction. | Automatic determination of grain size for efficient parallel processing The authors propose a method for the automatic determination and scheduling of modules from a sequential program. | Specifying software requirements for complex systems: new techniques and their application This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well suited to complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.
| Conjunction as composition Partial specifications written in many different specification languages can be composed if they are all given semantics in the same domain, or alternatively, all translated into a common style of predicate logic. The common semantic domain must be very general, the particular semantics assigned to each specification language must be conducive to composition, and there must be some means of communication that enables specifications to build on one another. The criteria for success are that a wide variety of specification languages should be accommodated, there should be no restrictions on where boundaries between languages can be placed, and intuitive expectations of the specifier should be met. | An application analyzer An interactive tool, aimed at supporting the application user/analyst in specifying and analyzing a business area, is presented. The features of the tool, named the Application Analyzer/Experimental, are described in terms of both their theoretical foundations and their actual implementation. A brief description of the architecture of the tool and its internal structure is given. A review of the main concepts of the application development area is also included. The follow-on of the prototype described here is the program offering known as System A. | An introduction to assertional reasoning for concurrent systems This is a tutorial introduction to assertional reasoning based on temporal logic. The objective is to provide a working familiarity with the technique. We use a simple system model and a simple proof system, and we keep to a minimum the treatment of issues such as soundness, completeness, compositionality, and abstraction. We model a concurrent system by a state transition system and fairness requirements. We reason about such systems using Hoare logic and a subset of linear-time temporal logic, specifically, invariant assertions and leads-to assertions. We apply the method to several examples. | On confusion between requirements and their representations Requirements representations are often confused with requirements. This confusion is not just widespread in practice; it exists even in the latest requirements engineering research and theory, leading to a number of negative consequences. In this article, we discuss these negative consequences and present a solution based on a strict distinction between requirements per se and requirements representations. We elaborate on this distinction and classify different forms of representations in a unified requirements representations ontology, including a refinement of descriptive and model-based requirements representations. | Hyperspectral image compression based on lapped transform and Tucker decomposition In this paper, we present a hyperspectral image compression system based on the lapped transform and Tucker decomposition (LT-TD). In the proposed method, each band of a hyperspectral image is first decorrelated by a lapped transform. The transformed coefficients of different frequencies are rearranged into three-dimensional (3D) wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors. Then they are decomposed by Tucker decomposition into a core tensor and three factor matrices. The core tensor preserves most of the energy of the original tensor, and it is encoded into bit-streams using a bit-plane coding algorithm.
Comparison experiments have been performed, along with an analysis of the factors contributing to the compression performance, such as the rank of the core tensor and the quantization of the factor matrices. Highlights: We design a hyperspectral image compression method using the lapped transform and Tucker decomposition. Each band of a hyperspectral image is decorrelated by a lapped transform. Transformed coefficients of various frequencies are rearranged into 3D wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors and decomposed by Tucker decomposition. The core tensor is encoded into bit-streams using a bit-plane coding algorithm. | 1.040133 | 0.040667 | 0.040667 | 0.040667 | 0.040667 | 0.020333 | 0.000333 | 0.000028 | 0 | 0 | 0 | 0 | 0 | 0
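Several abstracts in this record, notably the query abstract and the watermarking entries, report imperceptibility in terms of PSNR, MSE, and BER. Below is a minimal sketch of how these standard metrics are typically computed, assuming 8-bit images held in NumPy arrays; the function names and the toy data are illustrative only and are not taken from any of the cited papers.

```python
import numpy as np

def mse(cover: np.ndarray, stego: np.ndarray) -> float:
    # Mean square error between the cover and stego images.
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    # Peak signal-to-noise ratio in dB; higher values mean less visible distortion.
    err = mse(cover, stego)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def ber(embedded_bits: np.ndarray, extracted_bits: np.ndarray) -> float:
    # Bit error rate between the embedded and the extracted secret bit streams.
    return float(np.mean(embedded_bits != extracted_bits))

# Toy usage: a synthetic cover image and a stego image differing by +/-1 perturbations.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-1, 2, size=cover.shape)
stego = np.clip(cover.astype(np.int16) + noise, 0, 255).astype(np.uint8)
bits = rng.integers(0, 2, size=1024)

print(f"MSE  = {mse(cover, stego):.4f}")
print(f"PSNR = {psnr(cover, stego):.2f} dB")
print(f"BER  = {ber(bits, bits):.4f}")  # identical bit streams -> 0.0
```

As a rough rule of thumb, a PSNR above about 40 dB is usually taken to indicate visually imperceptible embedding, which is consistent with the 50-57 dB range reported in the query abstract.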