corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-601 | 0706.3459 | NP by means of lifts and shadows | <|reference_start|>NP by means of lifts and shadows: We show that every NP problem is polynomially equivalent to a simple combinatorial problem: the membership problem for a special class of digraphs. These classes are defined by means of shadows (projections) and by finitely many forbidden colored (lifted) subgraphs. Our characterization is motivated by the analysis of syntactical subclasses with the full computational power of NP, which were first studied by Feder and Vardi. Our approach applies to many combinatorial problems and it induces the characterization of coloring problems (CSP) defined by means of shadows. This turns out to be related to homomorphism dualities. We prove that a class of digraphs (relational structures) defined by finitely many forbidden colored subgraphs (i.e. lifted substructures) is a CSP class if and only if all the forbidden structures are homomorphically equivalent to trees. We show a surprising richness of coloring problems when restricted to the most frequent graph classes. Using results of Ne\v{s}et\v{r}il and Ossona de Mendez for bounded expansion classes (which include bounded degree and proper minor closed classes) we prove that the restriction of every class defined as the shadow of finitely many colored subgraphs equals the restriction of a coloring (CSP) class.<|reference_end|> | arxiv | @article{kun2007np,
title={NP by means of lifts and shadows},
author={Gabor Kun and Jaroslav Nesetril},
journal={arXiv preprint arXiv:0706.3459},
year={2007},
archivePrefix={arXiv},
eprint={0706.3459},
primaryClass={cs.CC math.CO}
} | kun2007np |
arxiv-602 | 0706.3479 | Hypocomputation | <|reference_start|>Hypocomputation: Hypercomputational formal theories will, clearly, be both structurally and foundationally different from the formal theories underpinning computational theories. However, many of the maps that might guide us into this strange realm have been lost. So little work has been done recently in the area of metamathematics, and so many of the previous results have been folded into other theories, that we are in danger of losing an appreciation of the broader structure of formal theories. As an aid to those looking to develop hypercomputational theories, we will briefly survey the known landmarks both inside and outside the borders of computational theory. We will not focus in this paper on why the structure of formal theory looks the way it does. Instead we will focus on what this structure looks like, moving from hypocomputational, through traditional computational theories, and then beyond to hypercomputational theories.<|reference_end|> | arxiv | @article{love2007hypocomputation,
title={Hypocomputation},
author={David Love},
journal={arXiv preprint arXiv:0706.3479},
year={2007},
archivePrefix={arXiv},
eprint={0706.3479},
primaryClass={cs.OH}
} | love2007hypocomputation |
arxiv-603 | 0706.3480 | Tight Bounds on the Average Length, Entropy, and Redundancy of Anti-Uniform Huffman Codes | <|reference_start|>Tight Bounds on the Average Length, Entropy, and Redundancy of Anti-Uniform Huffman Codes: In this paper we consider the class of anti-uniform Huffman codes and derive tight lower and upper bounds on the average length, entropy, and redundancy of such codes in terms of the alphabet size of the source. The Fibonacci distributions are introduced which play a fundamental role in AUH codes. It is shown that such distributions maximize the average length and the entropy of the code for a given alphabet size. Another previously known bound on the entropy for given average length follows immediately from our results.<|reference_end|> | arxiv | @article{mohajer2007tight,
title={Tight Bounds on the Average Length, Entropy, and Redundancy of
Anti-Uniform Huffman Codes},
author={Soheil Mohajer and Ali Kakhbod},
journal={IET Communications, vol. 5, no. 9, pp. 1213-1219, 2011},
year={2007},
archivePrefix={arXiv},
eprint={0706.3480},
primaryClass={cs.IT math.IT}
} | mohajer2007tight |
arxiv-604 | 0706.3502 | Approximately-Universal Space-Time Codes for the Parallel, Multi-Block and Cooperative-Dynamic-Decode-and-Forward Channels | <|reference_start|>Approximately-Universal Space-Time Codes for the Parallel, Multi-Block and Cooperative-Dynamic-Decode-and-Forward Channels: Explicit codes are constructed that achieve the diversity-multiplexing gain tradeoff of the cooperative-relay channel under the dynamic decode-and-forward protocol for any network size and for all numbers of transmit and receive antennas at the relays. A particularly simple code construction that makes use of the Alamouti code as a basic building block is provided for the single relay case. Along the way, we prove that space-time codes previously constructed in the literature for the block-fading and parallel channels are approximately universal, i.e., they achieve the DMT for any fading distribution. It is shown how approximate universality of these codes leads to the first DMT-optimum code construction for the general, MIMO-OFDM channel.<|reference_end|> | arxiv | @article{elia2007approximately-universal,
title={Approximately-Universal Space-Time Codes for the Parallel, Multi-Block
and Cooperative-Dynamic-Decode-and-Forward Channels},
author={Petros Elia and P. Vijay Kumar},
journal={arXiv preprint arXiv:0706.3502},
year={2007},
archivePrefix={arXiv},
eprint={0706.3502},
primaryClass={cs.IT cs.DM cs.NI math.IT}
} | elia2007approximately-universal |
arxiv-605 | 0706.3523 | There Exist some Omega-Powers of Any Borel Rank | <|reference_start|>There Exist some Omega-Powers of Any Borel Rank: Omega-powers of finitary languages are languages of infinite words (omega-languages) in the form V^omega, where V is a finitary language over a finite alphabet X. They appear very naturally in the characterization of regular or context-free omega-languages. Since the set of infinite words over a finite alphabet X can be equipped with the usual Cantor topology, the question of the topological complexity of omega-powers of finitary languages naturally arises and has been posed by Niwinski (1990), Simonnet (1992) and Staiger (1997). It has been recently proved that for each integer n > 0, there exist some omega-powers of context free languages which are Pi^0_n-complete Borel sets, that there exists a context free language L such that L^omega is analytic but not Borel, and that there exists a finitary language V such that V^omega is a Borel set of infinite rank. But it was still unknown which could be the possible infinite Borel ranks of omega-powers. We fill this gap here, proving the following very surprising result which shows that omega-powers exhibit a great topological complexity: for each non-null countable ordinal alpha, there exist some Sigma^0_alpha-complete omega-powers, and some Pi^0_alpha-complete omega-powers.<|reference_end|> | arxiv | @article{lecomte2007there,
title={There Exist some Omega-Powers of Any Borel Rank},
author={Dominique Lecomte (UMR 7586) and Olivier Finkel (LIP)},
journal={Proceedings of the 16th EACSL Annual Conference on Computer
Science and Logic, CSL 2007, September 11-15, 2007, Lausanne, Switzerland},
year={2007},
archivePrefix={arXiv},
eprint={0706.3523},
primaryClass={cs.LO cs.CC math.LO}
} | lecomte2007there |
arxiv-606 | 0706.3546 | stdchk: A Checkpoint Storage System for Desktop Grid Computing | <|reference_start|>stdchk: A Checkpoint Storage System for Desktop Grid Computing: Checkpointing is an indispensable technique to provide fault tolerance for long-running high-throughput applications like those running on desktop grids. This paper argues that a dedicated checkpoint storage system, optimized to operate in these environments, can offer multiple benefits: reduce the load on a traditional file system, offer high-performance through specialization, and, finally, optimize data management by taking into account checkpoint application semantics. Such a storage system can present a unifying abstraction to checkpoint operations, while hiding the fact that there are no dedicated resources to store the checkpoint data. We prototype stdchk, a checkpoint storage system that uses scavenged disk space from participating desktops to build a low-cost storage system, offering a traditional file system interface for easy integration with applications. This paper presents the stdchk architecture, key performance optimizations, support for incremental checkpointing, and increased data availability. Our evaluation confirms that the stdchk approach is viable in a desktop grid setting and offers a low cost storage system with desirable performance characteristics: high write throughput and reduced storage space and network effort to save checkpoint images.<|reference_end|> | arxiv | @article{kiswany2007stdchk:,
title={stdchk: A Checkpoint Storage System for Desktop Grid Computing},
author={Samer Al Kiswany and Matei Ripeanu and Sudharshan S. Vazhkudai and
Abdullah Gharaibeh},
journal={arXiv preprint arXiv:0706.3546},
year={2007},
archivePrefix={arXiv},
eprint={0706.3546},
primaryClass={cs.DC}
} | kiswany2007stdchk: |
arxiv-607 | 0706.3565 | Experimental Algorithm for the Maximum Independent Set Problem | <|reference_start|>Experimental Algorithm for the Maximum Independent Set Problem: We develop an experimental algorithm for exactly solving the maximum independent set problem. The algorithm consecutively finds the maximal independent sets of vertices in an arbitrary undirected graph such that the next such set contains more elements than the preceding one. For this purpose, we use a technique developed by Ford and Fulkerson for finite partially ordered sets, in particular, their method for partitioning a poset into the minimum number of chains while finding the maximum antichain. In the process of solving, a special digraph is constructed, and a conjecture is formulated concerning the properties of such a digraph. This allows us to formulate the solution algorithm. Its theoretical running time estimate is $O(n^{8})$, where $n$ is the number of graph vertices. The proposed algorithm was tested by a program on random graphs. The testing confirms the correctness of the algorithm.<|reference_end|> | arxiv | @article{plotnikov2007experimental,
title={Experimental Algorithm for the Maximum Independent Set Problem},
author={Anatoly D. Plotnikov},
journal={Cybernetics and Systems Analysis: Volume 48, Issue 5 (2012), Pages
673-680},
year={2007},
archivePrefix={arXiv},
eprint={0706.3565},
primaryClass={cs.DS}
} | plotnikov2007experimental |
arxiv-608 | 0706.3639 | A Collection of Definitions of Intelligence | <|reference_start|>A Collection of Definitions of Intelligence: This paper is a survey of a large number of informal definitions of ``intelligence'' that the authors have collected over the years. Naturally, compiling a complete list would be impossible as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70-odd definitions presented here form, to the authors' knowledge, the largest and most thoroughly referenced collection there is.<|reference_end|> | arxiv | @article{legg2007a,
title={A Collection of Definitions of Intelligence},
author={Shane Legg and Marcus Hutter},
journal={Frontiers in Artificial Intelligence and Applications, Vol.157
(2007) 17-24},
year={2007},
number={IDSIA-07-07},
archivePrefix={arXiv},
eprint={0706.3639},
primaryClass={cs.AI}
} | legg2007a |
arxiv-609 | 0706.3679 | Scale-sensitive Psi-dimensions: the Capacity Measures for Classifiers Taking Values in R^Q | <|reference_start|>Scale-sensitive Psi-dimensions: the Capacity Measures for Classifiers Taking Values in R^Q: Bounds on the risk play a crucial role in statistical learning theory. They usually involve as capacity measure of the model studied the VC dimension or one of its extensions. In classification, such "VC dimensions" exist for models taking values in {0, 1}, {1,..., Q} and R. We introduce the generalizations appropriate for the missing case, the one of models with values in R^Q. This provides us with a new guaranteed risk for M-SVMs which appears superior to the existing one.<|reference_end|> | arxiv | @article{guermeur2007scale-sensitive,
title={Scale-sensitive Psi-dimensions: the Capacity Measures for Classifiers
Taking Values in R^Q},
author={Yann Guermeur (LORIA)},
journal={ASMDA 2007 (2007) 1-8},
year={2007},
archivePrefix={arXiv},
eprint={0706.3679},
primaryClass={cs.LG}
} | guermeur2007scale-sensitive |
arxiv-610 | 0706.3710 | Optimal Constellations for the Low SNR Noncoherent MIMO Block Rayleigh Fading Channel | <|reference_start|>Optimal Constellations for the Low SNR Noncoherent MIMO Block Rayleigh Fading Channel: Reliable communication over the discrete-input/continuous-output noncoherent multiple-input multiple-output (MIMO) Rayleigh block fading channel is considered when the signal-to-noise ratio (SNR) per degree of freedom is low. Two key problems are posed and solved to obtain the optimum discrete input. In both problems, the average and peak power per space-time slot of the input constellation are constrained. In the first one, the peak power to average power ratio (PPAPR) of the input constellation is held fixed, while in the second problem, the peak power is fixed independently of the average power. In the first PPAPR-constrained problem, the mutual information, which grows as O(SNR^2), is maximized up to second order in SNR. In the second peak-constrained problem, where the mutual information behaves as O(SNR), the structure of constellations that are optimal up to first order, or equivalently, that minimize energy/bit, are explicitly characterized. Furthermore, among constellations that are first-order optimal, those that maximize the mutual information up to second order, or equivalently, the wideband slope, are characterized. In both PPAPR-constrained and peak-constrained problems, the optimal constellations are obtained in closed-form as solutions to non-convex optimizations, and interestingly, they are found to be identical. Due to its special structure, the common solution is referred to as Space Time Orthogonal Rank one Modulation, or STORM. In both problems, it is seen that STORM provides a sharp characterization of the behavior of noncoherent MIMO capacity.<|reference_end|> | arxiv | @article{srinivasan2007optimal,
title={Optimal Constellations for the Low SNR Noncoherent MIMO Block Rayleigh
Fading Channel},
author={Shivratna Giri Srinivasan and Mahesh K. Varanasi},
journal={arXiv preprint arXiv:0706.3710},
year={2007},
archivePrefix={arXiv},
eprint={0706.3710},
primaryClass={cs.IT math.IT}
} | srinivasan2007optimal |
arxiv-611 | 0706.3723 | Order-Invariant MSO is Stronger than Counting MSO in the Finite | <|reference_start|>Order-Invariant MSO is Stronger than Counting MSO in the Finite: We compare the expressiveness of two extensions of monadic second-order logic (MSO) over the class of finite structures. The first, counting monadic second-order logic (CMSO), extends MSO with first-order modulo-counting quantifiers, allowing the expression of queries like ``the number of elements in the structure is even''. The second extension allows the use of an additional binary predicate, not contained in the signature of the queried structure, that must be interpreted as an arbitrary linear order on its universe, obtaining order-invariant MSO. While it is straightforward that every CMSO formula can be translated into an equivalent order-invariant MSO formula, the converse had not yet been settled. Courcelle showed that for restricted classes of structures both order-invariant MSO and CMSO are equally expressive, but conjectured that, in general, order-invariant MSO is stronger than CMSO. We affirm this conjecture by presenting a class of structures that is order-invariantly definable in MSO but not definable in CMSO.<|reference_end|> | arxiv | @article{ganzow2007order-invariant,
title={Order-Invariant MSO is Stronger than Counting MSO in the Finite},
author={Tobias Ganzow and Sasha Rubin},
journal={Proceedings of the 25th Annual Symposium on Theoretical
Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)},
year={2007},
archivePrefix={arXiv},
eprint={0706.3723},
primaryClass={cs.LO}
} | ganzow2007order-invariant |
arxiv-612 | 0706.3750 | Pruning Processes and a New Characterization of Convex Geometries | <|reference_start|>Pruning Processes and a New Characterization of Convex Geometries: We provide a new characterization of convex geometries via a multivariate version of an identity that was originally proved by Maneva, Mossel and Wainwright for certain combinatorial objects arising in the context of the k-SAT problem. We thus highlight the connection between various characterizations of convex geometries and a family of removal processes studied in the literature on random structures.<|reference_end|> | arxiv | @article{ardila2007pruning,
title={Pruning Processes and a New Characterization of Convex Geometries},
author={Federico Ardila and Elitza Maneva},
journal={arXiv preprint arXiv:0706.3750},
year={2007},
archivePrefix={arXiv},
eprint={0706.3750},
primaryClass={math.CO cs.DM math.PR}
} | ardila2007pruning |
arxiv-613 | 0706.3752 | Secure Nested Codes for Type II Wiretap Channels | <|reference_start|>Secure Nested Codes for Type II Wiretap Channels: This paper considers the problem of secure coding design for a type II wiretap channel, where the main channel is noiseless and the eavesdropper channel is a general binary-input symmetric-output memoryless channel. The proposed secure error-correcting code has a nested code structure. Two secure nested coding schemes are studied for a type II Gaussian wiretap channel. The nesting is based on cosets of a good code sequence for the first scheme and on cosets of the dual of a good code sequence for the second scheme. In each case, the corresponding achievable rate-equivocation pair is derived based on the threshold behavior of good code sequences. The two secure coding schemes together establish an achievable rate-equivocation region, which almost covers the secrecy capacity-equivocation region in this case study. The proposed secure coding scheme is extended to a type II binary symmetric wiretap channel. A new achievable perfect secrecy rate, which improves upon the previously reported result by Thangaraj et al., is derived for this channel.<|reference_end|> | arxiv | @article{liu2007secure,
title={Secure Nested Codes for Type II Wiretap Channels},
author={Ruoheng Liu and Yingbin Liang and H. Vincent Poor and Predrag Spasojevic},
journal={arXiv preprint arXiv:0706.3752},
year={2007},
doi={10.1109/ITW.2007.4313097},
archivePrefix={arXiv},
eprint={0706.3752},
primaryClass={cs.IT cs.CR math.IT}
} | liu2007secure |
arxiv-614 | 0706.3753 | Multiple Access Channels with Generalized Feedback and Confidential Messages | <|reference_start|>Multiple Access Channels with Generalized Feedback and Confidential Messages: This paper considers the problem of secret communication over a multiple access channel with generalized feedback. Two trusted users send independent confidential messages to an intended receiver, in the presence of a passive eavesdropper. In this setting, an active cooperation between two trusted users is enabled through using channel feedback in order to improve the communication efficiency. Based on rate-splitting and decode-and-forward strategies, achievable secrecy rate regions are derived for both discrete memoryless and Gaussian channels. Results show that channel feedback improves the achievable secrecy rates.<|reference_end|> | arxiv | @article{tang2007multiple,
title={Multiple Access Channels with Generalized Feedback and Confidential
Messages},
author={Xiaojun Tang and Ruoheng Liu and Predrag Spasojevic and H. Vincent Poor},
journal={arXiv preprint arXiv:0706.3753},
year={2007},
doi={10.1109/ITW.2007.4313144},
archivePrefix={arXiv},
eprint={0706.3753},
primaryClass={cs.IT math.IT}
} | tang2007multiple |
arxiv-615 | 0706.3768 | Dynamic Exploration of Networks: from general principles to the traceroute process | <|reference_start|>Dynamic Exploration of Networks: from general principles to the traceroute process: Dynamical processes taking place on real networks define on them evolving subnetworks whose topology is not necessarily the same as that of the underlying one. We investigate the problem of determining the emerging degree distribution, focusing on a class of tree-like processes, such as those used to explore the Internet's topology. A general theory based on mean-field arguments is proposed, both for single-source and multiple-source cases, and applied to the specific example of the traceroute exploration of networks. Our results provide a qualitative improvement in the understanding of dynamical sampling and of the interplay between dynamics and topology in large networks like the Internet.<|reference_end|> | arxiv | @article{dall'asta2007dynamic,
title={Dynamic Exploration of Networks: from general principles to the
traceroute process},
author={Luca Dall'Asta},
journal={arXiv preprint arXiv:0706.3768},
year={2007},
doi={10.1140/epjb/e2007-00326-9},
archivePrefix={arXiv},
eprint={0706.3768},
primaryClass={physics.soc-ph cond-mat.dis-nn cs.NI physics.data-an}
} | dall'asta2007dynamic |
arxiv-616 | 0706.3812 | Java Components Vulnerabilities - An Experimental Classification Targeted at the OSGi Platform | <|reference_start|>Java Components Vulnerabilities - An Experimental Classification Targeted at the OSGi Platform: The OSGi Platform finds a growing interest in two different applications domains: embedded systems, and applications servers. However, the security properties of this platform are hardly studied, which is likely to hinder its use in production systems. This is all the more important that the dynamic aspect of OSGi-based applications, that can be extended at runtime, make them vulnerable to malicious code injection. We therefore perform a systematic audit of the OSGi platform so as to build a vulnerability catalog that intends to reference OSGi Vulnerabilities originating in the Core Specification, and in behaviors related to the use of the Java language. Standard Services are not considered. To support this audit, a Semi-formal Vulnerability Pattern is defined, that enables to uniquely characterize fundamental properties for each vulnerability, to include verbose description in the pattern, to reference known security protections, and to track the implementation status of the proof-of-concept OSGi Bundles that exploit the vulnerability. Based on the analysis of the catalog, a robust OSGi Platform is built, and recommendations are made to enhance the OSGi Specifications.<|reference_end|> | arxiv | @article{parrend2007java,
title={Java Components Vulnerabilities - An Experimental Classification
Targeted at the OSGi Platform},
author={Pierre Parrend (INRIA Rh\^one-Alpes) and St\'ephane Fr\'enot (INRIA
Rh\^one-Alpes)},
journal={arXiv preprint arXiv:0706.3812},
year={2007},
archivePrefix={arXiv},
eprint={0706.3812},
primaryClass={cs.CR cs.OS}
} | parrend2007java |
arxiv-617 | 0706.3834 | Design of optimal convolutional codes for joint decoding of correlated sources in wireless sensor networks | <|reference_start|>Design of optimal convolutional codes for joint decoding of correlated sources in wireless sensor networks: We consider a wireless sensor network scenario where two nodes detect correlated sources and deliver them to a central collector via a wireless link. Differently from the Slepian-Wolf approach to distributed source coding, in the proposed scenario the sensing nodes do not perform any pre-compression of the sensed data. Original data are instead independently encoded by means of low-complexity convolutional codes. The decoder performs joint decoding with the aim of exploiting the inherent correlation between the transmitted sources. Complexity at the decoder is kept low thanks to the use of an iterative joint decoding scheme, where the output of each decoder is fed to the other decoder's input as a-priori information. For such a scheme, we derive a novel analytical framework for evaluating an upper bound of joint-detection packet error probability and for deriving the optimum coding scheme. Experimental results confirm the validity of the analytical framework, and show that recursive codes allow a noticeable performance gain with respect to non-recursive coding schemes. Moreover, the proposed recursive coding scheme makes it possible to approach the ideal Slepian-Wolf scheme performance in the AWGN channel, and to clearly outperform it over fading channels on account of the diversity gain due to the correlation of information.<|reference_end|> | arxiv | @article{abrardo2007design,
title={Design of optimal convolutional codes for joint decoding of correlated
sources in wireless sensor networks},
author={A. Abrardo},
journal={arXiv preprint arXiv:0706.3834},
year={2007},
archivePrefix={arXiv},
eprint={0706.3834},
primaryClass={cs.IT math.IT}
} | abrardo2007design |
arxiv-618 | 0706.3846 | Opportunistic Scheduling and Beamforming for MIMO-SDMA Downlink Systems with Linear Combining | <|reference_start|>Opportunistic Scheduling and Beamforming for MIMO-SDMA Downlink Systems with Linear Combining: Opportunistic scheduling and beamforming schemes are proposed for multiuser MIMO-SDMA downlink systems with linear combining in this work. Signals received from all antennas of each mobile terminal (MT) are linearly combined to improve the {\em effective} signal-to-noise-interference ratios (SINRs). By exploiting limited feedback on the effective SINRs, the base station (BS) schedules simultaneous data transmission on multiple beams to the MTs with the largest effective SINRs. Utilizing the extreme value theory, we derive the asymptotic system throughputs and scaling laws for the proposed scheduling and beamforming schemes with different linear combining techniques. Computer simulations confirm that the proposed schemes can substantially improve the system throughput.<|reference_end|> | arxiv | @article{pun2007opportunistic,
title={Opportunistic Scheduling and Beamforming for MIMO-SDMA Downlink Systems
with Linear Combining},
author={Man-On Pun and Visa Koivunen and H. Vincent Poor},
journal={arXiv preprint arXiv:0706.3846},
year={2007},
doi={10.1109/PIMRC.2007.4394179},
archivePrefix={arXiv},
eprint={0706.3846},
primaryClass={cs.IT math.IT}
} | pun2007opportunistic |
arxiv-619 | 0706.3848 | Minimum Sum Edge Colorings of Multicycles | <|reference_start|>Minimum Sum Edge Colorings of Multicycles: In the minimum sum edge coloring problem, we aim to assign natural numbers to edges of a graph, so that adjacent edges receive different numbers, and the sum of the numbers assigned to the edges is minimum. The {\em chromatic edge strength} of a graph is the minimum number of colors required in a minimum sum edge coloring of this graph. We study the case of multicycles, defined as cycles with parallel edges, and give a closed-form expression for the chromatic edge strength of a multicycle, thereby extending a theorem due to Berge. It is shown that the minimum sum can be achieved with a number of colors equal to the chromatic index. We also propose simple algorithms for finding a minimum sum edge coloring of a multicycle. Finally, these results are generalized to a large family of minimum cost coloring problems.<|reference_end|> | arxiv | @article{cardinal2007minimum,
title={Minimum Sum Edge Colorings of Multicycles},
author={Jean Cardinal (ULB) and Vlady Ravelomanana (LIPN) and Mario
Valencia-Pabon (LIPN)},
journal={arXiv preprint arXiv:0706.3848},
year={2007},
archivePrefix={arXiv},
eprint={0706.3848},
primaryClass={cs.DM}
} | cardinal2007minimum |
arxiv-620 | 0706.3856 | Approximations of Lovasz extensions and their induced interaction index | <|reference_start|>Approximations of Lovasz extensions and their induced interaction index: The Lovasz extension of a pseudo-Boolean function $f : \{0,1\}^n \to R$ is defined on each simplex of the standard triangulation of $[0,1]^n$ as the unique affine function $\hat f : [0,1]^n \to R$ that interpolates $f$ at the $n+1$ vertices of the simplex. Its degree is that of the unique multilinear polynomial that expresses $f$. In this paper we investigate the least squares approximation problem of an arbitrary Lovasz extension $\hat f$ by Lovasz extensions of (at most) a specified degree. We derive explicit expressions of these approximations. The corresponding approximation problem for pseudo-Boolean functions was investigated by Hammer and Holzman (1992) and then solved explicitly by Grabisch, Marichal, and Roubens (2000), giving rise to an alternative definition of Banzhaf interaction index. Similarly we introduce a new interaction index from approximations of $\hat f$ and we present some of its properties. It turns out that its corresponding power index identifies with the power index introduced by Grabisch and Labreuche (2001).<|reference_end|> | arxiv | @article{marichal2007approximations,
title={Approximations of Lovasz extensions and their induced interaction index},
author={Jean-Luc Marichal and Pierre Mathonet},
journal={Discrete Applied Mathematics 156 (1) (2008) 11-24},
year={2007},
archivePrefix={arXiv},
eprint={0706.3856},
primaryClass={math.CO cs.DM}
} | marichal2007approximations |
arxiv-621 | 0706.3865 | Bid Optimization for Internet Graphical Ad Auction Systems via Special Ordered Sets | <|reference_start|>Bid Optimization for Internet Graphical Ad Auction Systems via Special Ordered Sets: This paper describes an optimization model for setting bid levels for certain types of advertisements on web pages. This model is non-convex, but we are able to obtain optimal or near-optimal solutions rapidly using branch and cut open-source software. The financial benefits obtained using the prototype system have been substantial.<|reference_end|> | arxiv | @article{wiggins2007bid,
title={Bid Optimization for Internet Graphical Ad Auction Systems via Special
Ordered Sets},
author={Ralphe Wiggins and John A. Tomlin},
journal={arXiv preprint arXiv:0706.3865},
year={2007},
number={YR-2007-004},
archivePrefix={arXiv},
eprint={0706.3865},
primaryClass={cs.DM}
} | wiggins2007bid |
arxiv-622 | 0706.3984 | A Comparison of Push and Pull Techniques for Ajax | <|reference_start|>A Comparison of Push and Pull Techniques for Ajax: Ajax applications are designed to have high user interactivity and low user-perceived latency. Real-time dynamic web data such as news headlines, stock tickers, and auction updates need to be propagated to the users as soon as possible. However, Ajax still suffers from the limitations of the Web's request/response architecture which prevents servers from pushing real-time dynamic web data. Such applications usually use a pull style to obtain the latest updates, where the client actively requests the changes based on a predefined interval. It is possible to overcome this limitation by adopting a push style of interaction where the server broadcasts data when a change occurs on the server side. Both these options have their own trade-offs. This paper explores the fundamental limits of browser-based applications and analyzes push solutions for Ajax technology. It also shows the results of an empirical study comparing push and pull.<|reference_end|> | arxiv | @article{bozdag2007a,
title={A Comparison of Push and Pull Techniques for Ajax},
author={Engin Bozdag and Ali Mesbah and Arie van Deursen},
journal={arXiv preprint arXiv:0706.3984},
year={2007},
archivePrefix={arXiv},
eprint={0706.3984},
primaryClass={cs.SE cs.PF}
} | bozdag2007a |
arxiv-623 | 0706.4004 | End-to-End Available Bandwidth Measurement Tools : A Comparative Evaluation of Performances | <|reference_start|>End-to-End Available Bandwidth Measurement Tools : A Comparative Evaluation of Performances: In recent years, there has been a strong interest in measuring the available bandwidth of network paths. Several methods and techniques have been proposed and various measurement tools have been developed and evaluated. However, there have been few comparative studies with regards to the actual performance of these tools. This paper presents a study of available bandwidth measurement techniques and undertakes a comparative analysis in terms of accuracy, intrusiveness and response time of active probing tools. Finally, measurement errors and the uncertainty of the tools are analysed and overall conclusions made.<|reference_end|> | arxiv | @article{ali2007end-to-end,
title={End-to-End Available Bandwidth Measurement Tools : A Comparative
Evaluation of Performances},
author={Ahmed Ait Ali (CRAN), Fabien Michaut (CRAN), Francis Lepage (CRAN)},
journal={IPS-MoMe 2006 IEEE/ACM International Workshop on Internet
Performance, Simulation, Monitoring and Measurement, Austria (27/02/2005) 13},
year={2007},
archivePrefix={arXiv},
eprint={0706.4004},
primaryClass={cs.NI}
} | ali2007end-to-end |
arxiv-624 | 0706.4009 | Multi-criteria scheduling of pipeline workflows | <|reference_start|>Multi-criteria scheduling of pipeline workflows: Mapping workflow applications onto parallel platforms is a challenging problem, even for simple application patterns such as pipeline graphs. Several antagonist criteria should be optimized, such as throughput and latency (or a combination). In this paper, we study the complexity of the bi-criteria mapping problem for pipeline graphs on communication homogeneous platforms. In particular, we assess the complexity of the well-known chains-to-chains problem for different-speed processors, which turns out to be NP-hard. We provide several efficient polynomial bi-criteria heuristics, and their relative performance is evaluated through extensive simulations.<|reference_end|> | arxiv | @article{benoit2007multi-criteria,
title={Multi-criteria scheduling of pipeline workflows},
author={Anne Benoit (INRIA Rhône-Alpes, LIP), Veronika Rehn-Sonigo (INRIA
Rhône-Alpes, LIP), Yves Robert (INRIA Rhône-Alpes, LIP)},
journal={arXiv preprint arXiv:0706.4009},
year={2007},
archivePrefix={arXiv},
eprint={0706.4009},
primaryClass={cs.DC}
} | benoit2007multi-criteria |
arxiv-625 | 0706.4015 | Self-Stabilizing Wavelets and r-Hops Coordination | <|reference_start|>Self-Stabilizing Wavelets and r-Hops Coordination: We introduce a simple tool called the wavelet (or r-wavelet) scheme. Wavelets deal with coordination among processes which are at most r hops away from each other. We present a self-stabilizing solution for this scheme. Our solution requires no underlying structure and works in arbitrary anonymous networks, i.e., no process identifier is required. Moreover, our solution works under any (even unfair) daemon. Next, we use the wavelet scheme to design self-stabilizing layer clocks. We show that they provide an efficient device in the design of local coordination problems at distance r, i.e., r-barrier synchronization and r-local resource allocation (LRA) such as r-local mutual exclusion (LME), r-group mutual exclusion (GME), and r-Reader/Writers. Some solutions to the r-LRA problem (e.g., r-LME) also provide transformers to transform algorithms written assuming any r-central daemon into algorithms working with any distributed daemon.<|reference_end|> | arxiv | @article{boulinier2007self-stabilizing,
title={Self-Stabilizing Wavelets and r-Hops Coordination},
author={Christian Boulinier (LaRIA), Franck Petit (LaRIA)},
journal={Internal report (01/04/2007)},
year={2007},
archivePrefix={arXiv},
eprint={0706.4015},
primaryClass={cs.DC}
} | boulinier2007self-stabilizing |
arxiv-626 | 0706.4035 | Encounter-based worms: Analysis and Defense | <|reference_start|>Encounter-based worms: Analysis and Defense: An encounter-based network is a frequently-disconnected wireless ad-hoc network requiring immediate neighbors to store and forward aggregated data for information dissemination. Using traditional approaches such as gateways or firewalls for deterring worm propagation in encounter-based networks is inappropriate. We propose the worm interaction approach, which relies upon automated beneficial worm generation to alleviate problems of worm propagation in such networks. To understand the dynamics of worm interactions and their performance, we mathematically model worm interactions based on major worm interaction factors, including worm interaction types, network characteristics, and node characteristics, using ordinary differential equations, and analyze their effects on our proposed metrics. We validate our proposed model using extensive synthetic and trace-driven simulations. We find that all worm interaction factors significantly affect the pattern of worm propagation. For example, immunization linearly decreases the infection of susceptible nodes, while on-off behavior only impacts the duration of infection. Using realistic mobile network measurements, we find that encounters are bursty, multi-group and non-uniform. The trends from the trace-driven simulations are, in general, consistent with the model. Immunization and timely deployment seem to be the most effective in countering worm attacks in such scenarios, while cooperation may help in specific cases. These findings provide insight that we hope will aid the development of counter-worm protocols in future encounter-based networks.<|reference_end|> | arxiv | @article{tanachaiwiwat2007encounter-based,
title={Encounter-based worms: Analysis and Defense},
author={Sapon Tanachaiwiwat, Ahmed Helmy},
journal={arXiv preprint arXiv:0706.4035},
year={2007},
archivePrefix={arXiv},
eprint={0706.4035},
primaryClass={cs.NI cs.CR}
} | tanachaiwiwat2007encounter-based |
arxiv-627 | 0706.4038 | Scheduling multiple divisible loads on a linear processor network | <|reference_start|>Scheduling multiple divisible loads on a linear processor network: Min, Veeravalli, and Barlas have recently proposed strategies to minimize the overall execution time of one or several divisible loads on a heterogeneous linear network, using one or more installments. We show on a very simple example that their approach does not always produce a solution and that, when it does, the solution is often suboptimal. We also show how to find an optimal schedule for any instance, once the number of installments per load is given. Then, we formally state that any optimal schedule has an infinite number of installments under a linear cost model as the one assumed in the original papers. Therefore, such a cost model cannot be used to design practical multi-installment strategies. Finally, through extensive simulations we confirmed that the best solution is always produced by the linear programming approach, while solutions of the original papers can be far away from the optimal.<|reference_end|> | arxiv | @article{gallet2007scheduling,
title={Scheduling multiple divisible loads on a linear processor network},
author={Matthieu Gallet (LIP, INRIA Rhône-Alpes), Yves Robert (LIP, INRIA
Rhône-Alpes), Frédéric Vivien (LIP, INRIA Rhône-Alpes)},
journal={arXiv preprint arXiv:0706.4038},
year={2007},
archivePrefix={arXiv},
eprint={0706.4038},
primaryClass={cs.DC}
} | gallet2007scheduling |
arxiv-628 | 0706.4044 | PSPACE Bounds for Rank-1 Modal Logics | <|reference_start|>PSPACE Bounds for Rank-1 Modal Logics: For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation, in PSPACE. This leads to a unified derivation of tight PSPACE-bounds for a number of logics including K, KD, coalition logic, graded modal logic, majority logic, and probabilistic modal logic. Our generic algorithm moreover finds tableau proofs that witness pleasant proof-theoretic properties including a weak subformula property. This generality is made possible by a coalgebraic semantics, which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.<|reference_end|> | arxiv | @article{schröder2007pspace,
title={PSPACE Bounds for Rank-1 Modal Logics},
author={Lutz Schröder and Dirk Pattinson},
journal={ACM Transactions on Computational Logic 10 (2:13), pp. 1-33, 2009},
year={2007},
doi={10.1145/1462179.1462185},
number={Imperial College TR 2007/4},
archivePrefix={arXiv},
eprint={0706.4044},
primaryClass={cs.LO cs.CC}
} | schröder2007pspace |
arxiv-629 | 0706.4048 | Getting More From Your Multicore: Exploiting OpenMP From An Open Source Numerical Scripting Language | <|reference_start|>Getting More From Your Multicore: Exploiting OpenMP From An Open Source Numerical Scripting Language: We introduce SLIRP, a module generator for the S-Lang numerical scripting language, with a focus on its vectorization capabilities. We demonstrate how both SLIRP and S-Lang were easily adapted to exploit the inherent parallelism of high-level mathematical languages with OpenMP, allowing general users to employ tightly-coupled multiprocessors in scriptable research calculations while requiring no special knowledge of parallel programming. Motivated by examples in the ISIS astrophysical modeling & analysis tool, performance figures are presented for several machine and compiler configurations, demonstrating beneficial speedups for real-world operations.<|reference_end|> | arxiv | @article{noble2007getting,
title={Getting More From Your Multicore: Exploiting OpenMP From An Open Source
Numerical Scripting Language},
author={Michael S. Noble},
journal={arXiv preprint arXiv:0706.4048},
year={2007},
archivePrefix={arXiv},
eprint={0706.4048},
primaryClass={cs.DC astro-ph}
} | noble2007getting |
arxiv-630 | 0706.4095 | Some Quantitative Aspects of Fractional Computability | <|reference_start|>Some Quantitative Aspects of Fractional Computability: Motivated by results on generic-case complexity in group theory, we apply the ideas of effective Baire category and effective measure theory to study complexity classes of functions which are "fractionally computable" by a partial algorithm. For this purpose it is crucial to specify an allowable effective density, $\delta$, of convergence for a partial algorithm. The set $\mathcal{FC}(\delta)$ consists of all total functions $ f: \Sigma^\ast \to \{0,1 \}$ where $\Sigma$ is a finite alphabet with $|\Sigma| \ge 2$ which are "fractionally computable at density $\delta$". The space $\mathcal{FC}(\delta) $ is effectively of the second category while any fractional complexity class, defined using $\delta$ and any computable bound $\beta$ with respect to an abstract Blum complexity measure, is effectively meager. A remarkable result of Kautz and Miltersen shows that relative to an algorithmically random oracle $A$, the relativized class $\mathcal{NP}^A$ does not have effective polynomial measure zero in $\mathcal{E}^A$, the relativization of strict exponential time. We define the class $\mathcal{UFP}^A$ of all languages which are fractionally decidable in polynomial time at ``a uniform rate'' by algorithms with an oracle for $A$. We show that this class does have effective polynomial measure zero in $\mathcal{E}^A$ for every oracle $A$. Thus relaxing the requirement of polynomial time decidability to hold only for a fraction of possible inputs does not compensate for the power of nondeterminism in the case of random oracles.<|reference_end|> | arxiv | @article{kapovich2007some,
title={Some Quantitative Aspects of Fractional Computability},
author={Ilya Kapovich and Paul Schupp},
journal={arXiv preprint arXiv:0706.4095},
year={2007},
archivePrefix={arXiv},
eprint={0706.4095},
primaryClass={math.GR cs.CC}
} | kapovich2007some |
arxiv-631 | 0706.4107 | Radix Sorting With No Extra Space | <|reference_start|>Radix Sorting With No Extra Space: It is well known that n integers in the range [1,n^c] can be sorted in O(n) time in the RAM model using radix sorting. More generally, integers in any range [1,U] can be sorted in O(n sqrt{loglog n}) time. However, these algorithms use O(n) words of extra memory. Is this necessary? We present a simple, stable, integer sorting algorithm for words of size O(log n), which works in O(n) time and uses only O(1) words of extra memory on a RAM model. This is the integer sorting case most useful in practice. We extend this result with same bounds to the case when the keys are read-only, which is of theoretical interest. Another interesting question is the case of arbitrary c. Here we present a black-box transformation from any RAM sorting algorithm to a sorting algorithm which uses only O(1) extra space and has the same running time. This settles the complexity of in-place sorting in terms of the complexity of sorting.<|reference_end|> | arxiv | @article{franceschini2007radix,
title={Radix Sorting With No Extra Space},
author={Gianni Franceschini, S. Muthukrishnan and Mihai Patrascu},
journal={arXiv preprint arXiv:0706.4107},
year={2007},
archivePrefix={arXiv},
eprint={0706.4107},
primaryClass={cs.DS}
} | franceschini2007radix |
arxiv-632 | 0706.4161 | The Domino Problem of the Hyperbolic Plane Is Undecidable | <|reference_start|>The Domino Problem of the Hyperbolic Plane Is Undecidable: In this paper, we prove that the general tiling problem of the hyperbolic plane is undecidable by proving a slightly stronger version using only a regular polygon as the basic shape of the tiles. The problem was raised by a paper of Raphael Robinson in 1971, in his famous simplified proof that the general tiling problem is undecidable for the Euclidean plane, initially proved by Robert Berger in 1966.<|reference_end|> | arxiv | @article{margenstern2007the,
title={The Domino Problem of the Hyperbolic Plane Is Undecidable},
author={Maurice Margenstern},
journal={The Bulletin of EATCS, 93(Oct.), (2007), 220-237},
year={2007},
archivePrefix={arXiv},
eprint={0706.4161},
primaryClass={cs.CG cs.DM}
} | margenstern2007the |
arxiv-633 | 0706.4170 | Hilbert++ Manual | <|reference_start|>Hilbert++ Manual: We present here an installation guide, a hands-on mini-tutorial through examples, and the theoretical foundations of the Hilbert++ code.<|reference_end|> | arxiv | @article{mirone2007hilbert++,
title={Hilbert++ Manual},
author={Alessandro Mirone},
journal={arXiv preprint arXiv:0706.4170},
year={2007},
archivePrefix={arXiv},
eprint={0706.4170},
primaryClass={cs.OH cond-mat.str-el}
} | mirone2007hilbert++ |
arxiv-634 | 0706.4175 | Heuristics for Network Coding in Wireless Networks | <|reference_start|>Heuristics for Network Coding in Wireless Networks: Multicast is a central challenge for emerging multi-hop wireless architectures such as wireless mesh networks, because of its substantial cost in terms of bandwidth. In this report, we study one specific case of multicast: broadcasting, sending data from one source to all nodes, in a multi-hop wireless network. The broadcast we focus on is based on network coding, a promising avenue for reducing cost; previous work of ours showed that the performance of network coding with simple heuristics is asymptotically optimal: each transmission is beneficial to nearly every receiver. This is for homogeneous and large networks of the plane. But for small, sparse, or inhomogeneous networks, some additional heuristics are required. This report proposes such additional new heuristics (for selecting rates) for broadcasting with network coding. Our heuristics are intended to use only simple local topology information. We detail the logic of the heuristics, and with experimental results, we illustrate the behavior of the heuristics, and demonstrate their excellent performance.<|reference_end|> | arxiv | @article{cho2007heuristics,
title={Heuristics for Network Coding in Wireless Networks},
author={Song Yean Cho (INRIA Rocquencourt), Cédric Adjih (INRIA
Rocquencourt), Philippe Jacquet (INRIA Rocquencourt)},
journal={arXiv preprint arXiv:0706.4175},
year={2007},
archivePrefix={arXiv},
eprint={0706.4175},
primaryClass={cs.NI}
} | cho2007heuristics |
arxiv-635 | 0706.4224 | User driven applications - new design paradigm | <|reference_start|>User driven applications - new design paradigm: Programs for complicated engineering and scientific tasks always have to deal with the problem of showing numerous graphical results. The limits of the screen space and the often opposite requirements of different users are the cause of endless discussions between designers and users, but the source of this ongoing conflict is not at the level of interface design; it lies in the basic principle of current graphical output: the user may change some views and details, but in general the output view is entirely defined and fixed by the developer. The author has worked for several years on an algorithm that eliminates this problem, allowing a step from designer-driven applications to user-driven ones. Such applications, in which the user decides what, when and how to show on the screen, are the dream of scientists and engineers working on the analysis of the most complicated tasks. The new paradigm is based on movable and resizable graphics, and such graphics can be widely used not only for scientific and engineering applications.<|reference_end|> | arxiv | @article{andreyev2007user,
title={User driven applications - new design paradigm},
author={Sergey Andreyev},
journal={arXiv preprint arXiv:0706.4224},
year={2007},
archivePrefix={arXiv},
eprint={0706.4224},
primaryClass={cs.GR cs.HC}
} | andreyev2007user |
arxiv-636 | 0706.4298 | Unison as a Self-Stabilizing Wave Stream Algorithm in Asynchronous Anonymous Networks | <|reference_start|>Unison as a Self-Stabilizing Wave Stream Algorithm in Asynchronous Anonymous Networks: How to pass from local to global scales in anonymous networks? How to organize a self-stabilizing propagation of information with feedback? From the Angluin impossibility results, we cannot elect a leader in a general anonymous network. Thus, it is impossible to build a rooted spanning tree. Many problems can only be solved by probabilistic methods. In this paper we show how to use Unison to design a self-stabilizing barrier synchronization in an anonymous network. We show that the communication structure of this barrier synchronization designs a self-stabilizing wave-stream, or pipelining wave, in anonymous networks. We introduce two variants of Wave: the strong waves and the wavelets. A strong wave can be used to solve the idempotent r-operator parametrized computation problem. A wavelet deals with k-distance computation. We show how to use Unison to design a self-stabilizing wave stream, a self-stabilizing strong wave stream and a self-stabilizing wavelet stream.<|reference_end|> | arxiv | @article{boulinier2007unison,
title={Unison as a Self-Stabilizing Wave Stream Algorithm in Asynchronous
Anonymous Networks},
author={Christian Boulinier (LaRIA)},
journal={Research report (28/06/2007)},
year={2007},
archivePrefix={arXiv},
eprint={0706.4298},
primaryClass={cs.DC}
} | boulinier2007unison |
arxiv-637 | 0706.4323 | Theory of Finite or Infinite Trees Revisited | <|reference_start|>Theory of Finite or Infinite Trees Revisited: We present in this paper a first-order axiomatization of an extended theory $T$ of finite or infinite trees, built on a signature containing an infinite set of function symbols and a relation $\fini(t)$ which enables to distinguish between finite or infinite trees. We show that $T$ has at least one model and prove its completeness by giving not only a decision procedure, but a full first-order constraint solver which gives clear and explicit solutions for any first-order constraint satisfaction problem in $T$. The solver is given in the form of 16 rewriting rules which transform any first-order constraint $\phi$ into an equivalent disjunction $\phi$ of simple formulas such that $\phi$ is either the formula $\true$ or the formula $\false$ or a formula having at least one free variable, being equivalent neither to $\true$ nor to $\false$ and where the solutions of the free variables are expressed in a clear and explicit way. The correctness of our rules implies the completeness of $T$. We also describe an implementation of our algorithm in CHR (Constraint Handling Rules) and compare the performance with an implementation in C++ and that of a recent decision procedure for decomposable theories.<|reference_end|> | arxiv | @article{djelloul2007theory,
title={Theory of Finite or Infinite Trees Revisited},
author={Khalil Djelloul, Thi-bich-hanh Dao and Thom Fruehwirth},
journal={arXiv preprint arXiv:0706.4323},
year={2007},
archivePrefix={arXiv},
eprint={0706.4323},
primaryClass={cs.LO cs.AI}
} | djelloul2007theory |
arxiv-638 | 0706.4375 | A Robust Linguistic Platform for Efficient and Domain specific Web Content Analysis | <|reference_start|>A Robust Linguistic Platform for Efficient and Domain specific Web Content Analysis: Web semantic access in specific domains calls for specialized search engines with enhanced semantic querying and indexing capacities, which pertain both to information retrieval (IR) and to information extraction (IE). A rich linguistic analysis is required either to identify the relevant semantic units to index and weight them according to linguistic specific statistical distribution, or as the basis of an information extraction process. Recent developments make Natural Language Processing (NLP) techniques reliable enough to process large collections of documents and to enrich them with semantic annotations. This paper focuses on the design and the development of a text processing platform, Ogmios, which has been developed in the ALVIS project. The Ogmios platform exploits existing NLP modules and resources, which may be tuned to specific domains and produces linguistically annotated documents. We show how the three constraints of genericity, domain semantic awareness and performance can be handled all together.<|reference_end|> | arxiv | @article{hamon2007a,
title={A Robust Linguistic Platform for Efficient and Domain specific Web
Content Analysis},
author={Thierry Hamon (LIPN), Adeline Nazarenko (LIPN), Thierry Poibeau
(LIPN), Sophie Aubin (LIPN), Julien Derivière (LIPN)},
journal={Proceedings of RIAO 2007 (30/05/2007)},
year={2007},
archivePrefix={arXiv},
eprint={0706.4375},
primaryClass={cs.AI}
} | hamon2007a |
arxiv-639 | 0706.4440 | 2-State 3-Symbol Universal Turing Machines Do Not Exist | <|reference_start|>2-State 3-Symbol Universal Turing Machines Do Not Exist: In this brief note, we give a simple information-theoretic proof that 2-state 3-symbol universal Turing machines cannot possibly exist, unless one loosens the definition of "universal".<|reference_end|> | arxiv | @article{feinstein20072-state,
title={2-State 3-Symbol Universal Turing Machines Do Not Exist},
author={Craig Alan Feinstein},
journal={arXiv preprint arXiv:0706.4440},
year={2007},
archivePrefix={arXiv},
eprint={0706.4440},
primaryClass={cs.OH}
} | feinstein20072-state |
arxiv-640 | 0707.0050 | Non-atomic Games for Multi-User Systems | <|reference_start|>Non-atomic Games for Multi-User Systems: In this contribution, the performance of a multi-user system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency selective channels for uplink CDMA. This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the Matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.<|reference_end|> | arxiv | @article{bonneau2007non-atomic,
title={Non-atomic Games for Multi-User Systems},
author={Nicolas Bonneau, Mérouane Debbah, Eitan Altman, Are Hjørungnes},
journal={arXiv preprint arXiv:0707.0050},
year={2007},
doi={10.1109/JSAC.2008.080903},
archivePrefix={arXiv},
eprint={0707.0050},
primaryClass={cs.IT cs.GT math.IT}
} | bonneau2007non-atomic |
arxiv-641 | 0707.0181 | Location and Spectral Estimation of Weak Wave Packets on Noise Background | <|reference_start|>Location and Spectral Estimation of Weak Wave Packets on Noise Background: A method for the location and spectral estimation of weak signals on a noise background is considered. The method is based on an autoregressive model of the sought signal, optimized with respect to order and noise dispersion. A new approach to model order determination is offered. The resulting estimate of the noise dispersion is close to the real one. The optimized model allows defining a function of changes in the spectral and dynamic features of the empirical data. The analysis of the signal as a dynamic invariant with respect to the linear shift transformation yields a function of model consistency. Using both of these functions enables the detection of short-time and nonstationary wave packets at signal-to-noise ratios of -20 dB and above.<|reference_end|> | arxiv | @article{bunyak2007location,
title={Location and Spectral Estimation of Weak Wave Packets on Noise
Background},
author={Yu. Bunyak and O. Bunyak},
journal={arXiv preprint arXiv:0707.0181},
year={2007},
archivePrefix={arXiv},
eprint={0707.0181},
primaryClass={cs.CE}
} | bunyak2007location |
arxiv-642 | 0707.0234 | Selection Relaying at Low Signal to Noise Ratios | <|reference_start|>Selection Relaying at Low Signal to Noise Ratios: Performance of cooperative diversity schemes at Low Signal to Noise Ratios (LSNR) was recently studied by Avestimehr et al. [1], who emphasized the importance of diversity gain over multiplexing gain at low SNRs. It has also been pointed out that continuous energy transfer to the channel is necessary for achieving the max-flow min-cut bound at LSNR. Motivated by this, we propose the use of Selection Decode and Forward (SDF) at LSNR and analyze its performance in terms of the outage probability. We also propose an energy optimization scheme which further brings down the outage probability.<|reference_end|> | arxiv | @article{rajawat2007selection,
title={Selection Relaying at Low Signal to Noise Ratios},
author={Ketan Rajawat and Adrish Banerjee},
journal={arXiv preprint arXiv:0707.0234},
year={2007},
archivePrefix={arXiv},
eprint={0707.0234},
primaryClass={cs.IT math.IT}
} | rajawat2007selection |
arxiv-643 | 0707.0282 | Directed Feedback Vertex Set is Fixed-Parameter Tractable | <|reference_start|>Directed Feedback Vertex Set is Fixed-Parameter Tractable: We resolve positively a long standing open question regarding the fixed-parameter tractability of the parameterized Directed Feedback Vertex Set problem. In particular, we propose an algorithm which solves this problem in $O(8^kk!*poly(n))$.<|reference_end|> | arxiv | @article{razgon2007directed,
title={Directed Feedback Vertex Set is Fixed-Parameter Tractable},
author={Igor Razgon and Barry O'Sullivan},
journal={arXiv preprint arXiv:0707.0282},
year={2007},
archivePrefix={arXiv},
eprint={0707.0282},
primaryClass={cs.DS cs.CC}
} | razgon2007directed |
arxiv-644 | 0707.0285 | A Generalized Sampling Theorem for Frequency Localized Signals | <|reference_start|>A Generalized Sampling Theorem for Frequency Localized Signals: A generalized sampling theorem for frequency localized signals is presented. The generalization in the proposed model of sampling is twofold: (1) It applies to various prefilters effecting a "soft" bandlimitation, (2) an approximate reconstruction from sample values rather than a perfect one is obtained (though the former might be "practically perfect" in many cases). For an arbitrary finite-energy signal the frequency localization is performed by a prefilter realizing a crosscorrelation with a function of prescribed properties. The range of the filter, the so-called localization space, is described in some detail. Regular sampling is applied and a reconstruction formula is given. For the reconstruction error a general error estimate is derived and connections between a critical sampling interval and notions of "soft bandwidth" for the prefilter are indicated. Examples based on the sinc-function, Gaussian functions and B-splines are discussed.<|reference_end|> | arxiv | @article{hammerich2007a,
title={A Generalized Sampling Theorem for Frequency Localized Signals},
author={Edwin Hammerich},
journal={Sampl. Theory Signal Image Process., Vol. 8, No. 2, May 2009, pp.
127-146},
year={2007},
archivePrefix={arXiv},
eprint={0707.0285},
primaryClass={cs.IT math.IT}
} | hammerich2007a |
arxiv-645 | 0707.0323 | Interference Alignment and the Degrees of Freedom for the K User Interference Channel | <|reference_start|>Interference Alignment and the Degrees of Freedom for the K User Interference Channel: While the best known outer bound for the K user interference channel states that there cannot be more than K/2 degrees of freedom, it has been conjectured that in general the constant interference channel with any number of users has only one degree of freedom. In this paper, we explore the spatial degrees of freedom per orthogonal time and frequency dimension for the K user wireless interference channel where the channel coefficients take distinct values across frequency slots but are fixed in time. We answer five closely related questions. First, we show that K/2 degrees of freedom can be achieved by channel design, i.e., if the nodes are allowed to choose the best constant, finite and nonzero channel coefficient values. Second, we show that if channel coefficients cannot be controlled by the nodes but are selected by nature, i.e., randomly drawn from a continuous distribution, the total number of spatial degrees of freedom for the K user interference channel is almost surely K/2 per orthogonal time and frequency dimension. Thus, only half the spatial degrees of freedom are lost due to distributed processing of transmitted and received signals on the interference channel. Third, we show that interference alignment and zero forcing suffice to achieve all the degrees of freedom in all cases. Fourth, we show that the degrees of freedom $D$ directly lead to an $\mathcal{O}(1)$ capacity characterization of the form $C(SNR)=D\log(1+SNR)+\mathcal{O}(1)$ for the multiple access channel, the broadcast channel, the 2 user interference channel, the 2 user MIMO X channel and the 3 user interference channel with M>1 antennas at each node. Fifth, we characterize the degree of freedom benefits from cognitive sharing of messages on the 3 user interference channel.<|reference_end|> | arxiv | @article{cadambe2007interference,
title={Interference Alignment and the Degrees of Freedom for the K User
Interference Channel},
author={Viveck R. Cadambe, Syed A. Jafar},
journal={arXiv preprint arXiv:0707.0323},
year={2007},
archivePrefix={arXiv},
eprint={0707.0323},
primaryClass={cs.IT math.IT}
} | cadambe2007interference |
arxiv-646 | 0707.0336 | Pricing Options on Defaultable Stocks | <|reference_start|>Pricing Options on Defaultable Stocks: In this note, we develop stock option price approximations for a model which takes both the risk of default and the stochastic volatility into account. We also let the intensity of defaults be influenced by the volatility. We show that it might be possible to infer the risk neutral default intensity from the stock option prices. Our option price approximation has a rich implied volatility surface structure and fits the implied volatility data well. Our calibration exercise shows that an effective hazard rate from bonds issued by a company can be used to explain the implied volatility skew of the option prices issued by the same company.<|reference_end|> | arxiv | @article{bayraktar2007pricing,
title={Pricing Options on Defaultable Stocks},
author={Erhan Bayraktar},
journal={arXiv preprint arXiv:0707.0336},
year={2007},
archivePrefix={arXiv},
eprint={0707.0336},
primaryClass={cs.CE}
} | bayraktar2007pricing |
arxiv-647 | 0707.0365 | Performance Analysis of Publish/Subscribe Systems | <|reference_start|>Performance Analysis of Publish/Subscribe Systems: The Desktop Grid offers solutions to overcome several challenges and to address the increasing needs of scientific computing. Its technology consists mainly in exploiting geographically dispersed resources to run complex applications that require substantial computing power and/or storage capacity. However, as the number of resources increases, the need for scalability, self-organisation, dynamic reconfiguration, decentralisation and performance becomes more and more essential. Since such properties are exhibited by P2P systems, the convergence of grid computing and P2P computing seems natural. In this context, this paper evaluates the scalability and performance of P2P tools for discovering and registering services. Three protocols are used for this purpose: Bonjour, Avahi and Free-Pastry. We have studied the behaviour of these protocols with respect to two criteria: the elapsed time for registering services and the time needed to discover new services. Our aim is to analyse these results in order to choose the best protocol for building a decentralised middleware for desktop grids.<|reference_end|> | arxiv | @article{abbes2007performance,
title={Performance Analysis of Publish/Subscribe Systems},
author={Heithem Abbes (UTIC), Christophe C\'erin (LIPN), Jean-Christophe
Dubacq (LIPN), Mohamed Jemni (UTIC)},
journal={Rapport Interne LIPN (15/05/2007)},
year={2007},
archivePrefix={arXiv},
eprint={0707.0365},
primaryClass={cs.DC}
} | abbes2007performance |
arxiv-648 | 0707.0397 | Robust Audio Watermarking Against the D/A and A/D conversions | <|reference_start|>Robust Audio Watermarking Against the D/A and A/D conversions: Audio watermarking has played an important role in multimedia security. In many applications using audio watermarking, D/A and A/D conversions (denoted by DA/AD in this paper) are often involved. In previous works, however, the robustness issue of audio watermarking against the DA/AD conversions has not drawn sufficient attention yet. In our extensive investigation, it has been found that the degradation of a watermarked audio signal caused by the DA/AD conversions manifests itself mainly in terms of wave magnitude distortion and linear temporal scaling, making the watermark extraction failed. Accordingly, a DWT-based audio watermarking algorithm robust against the DA/AD conversions is proposed in this paper. To resist the magnitude distortion, the relative energy relationships among different groups of the DWT coefficients in the low-frequency sub-band are utilized in watermark embedding by adaptively controlling the embedding strength. Furthermore, the resynchronization is designed to cope with the linear temporal scaling. The time-frequency localization characteristics of DWT are exploited to save the computational load in the resynchronization. Consequently, the proposed audio watermarking algorithm is robust against the DA/AD conversions, other common audio processing manipulations, and the attacks in StirMark Benchmark for Audio, which has been verified by experiments.<|reference_end|> | arxiv | @article{xiang2007robust,
title={Robust Audio Watermarking Against the D/A and A/D conversions},
author={Shijun Xiang and Jiwu Huang},
journal={arXiv preprint arXiv:0707.0397},
year={2007},
archivePrefix={arXiv},
eprint={0707.0397},
primaryClass={cs.CR cs.MM}
} | xiang2007robust |
arxiv-649 | 0707.0421 | The $k$-anonymity Problem is Hard | <|reference_start|>The $k$-anonymity Problem is Hard: The problem of publishing personal data without giving up privacy is becoming increasingly important. An interesting formalization recently proposed is the k-anonymity. This approach requires that the rows in a table are clustered in sets of size at least k and that all the rows in a cluster become the same tuple, after the suppression of some records. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is known to be NP-hard when the values are over a ternary alphabet, k = 3 and the rows length is unbounded. In this paper we give a lower bound on the approximation factor that any polynomial-time algorithm can achive on two restrictions of the problem,namely (i) when the records values are over a binary alphabet and k = 3, and (ii) when the records have length at most 8 and k = 4, showing that these restrictions of the problem are APX-hard.<|reference_end|> | arxiv | @article{bonizzoni2007the,
title={The $k$-anonymity Problem is Hard},
author={Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi},
journal={arXiv preprint arXiv:0707.0421},
year={2007},
archivePrefix={arXiv},
eprint={0707.0421},
primaryClass={cs.DB cs.CC cs.DS}
} | bonizzoni2007the |
arxiv-650 | 0707.0430 | Assisted Problem Solving and Decompositions of Finite Automata | <|reference_start|>Assisted Problem Solving and Decompositions of Finite Automata: A study of assisted problem solving formalized via decompositions of deterministic finite automata is initiated. The landscape of new types of decompositions of finite automata this study uncovered is presented. Languages with various degrees of decomposability between undecomposable and perfectly decomposable are shown to exist.<|reference_end|> | arxiv | @article{gaži2007assisted,
title={Assisted Problem Solving and Decompositions of Finite Automata},
author={Peter Ga\v{z}i, Branislav Rovan},
journal={arXiv preprint arXiv:0707.0430},
year={2007},
archivePrefix={arXiv},
eprint={0707.0430},
primaryClass={cs.CC}
} | gaži2007assisted |
arxiv-651 | 0707.0454 | Optimal Strategies for Gaussian Jamming in Block-Fading Channels under Delay and Power Constraints | <|reference_start|>Optimal Strategies for Gaussian Jamming in Block-Fading Channels under Delay and Power Constraints: Without assuming any knowledge on source's codebook and its output signals, we formulate a Gaussian jamming problem in block fading channels as a two-player zero sum game. The outage probability is adopted as an objective function, over which transmitter aims at minimization and jammer aims at maximization by selecting their power control strategies. Optimal power control strategies for each player are obtained under both short-term and long-term power constraints. For the latter case, we first prove the non-existence of a Nash equilibrium, and then provide a complete solution for both maxmin and minimax problems. Numerical results demonstrate a sharp difference between the outage probabilities of the minimax and maxmin solutions.<|reference_end|> | arxiv | @article{amariucai2007optimal,
title={Optimal Strategies for Gaussian Jamming in Block-Fading Channels under
Delay and Power Constraints},
author={George T. Amariucai, Shuangqing Wei and Rajgopal Kannan},
journal={arXiv preprint arXiv:0707.0454},
year={2007},
archivePrefix={arXiv},
eprint={0707.0454},
primaryClass={cs.IT math.IT}
} | amariucai2007optimal |
arxiv-652 | 0707.0459 | Physical Network Coding in Two-Way Wireless Relay Channels | <|reference_start|>Physical Network Coding in Two-Way Wireless Relay Channels: It has recently been recognized that the wireless networks represent a fertile ground for devising communication modes based on network coding. A particularly suitable application of the network coding arises for the two--way relay channels, where two nodes communicate with each other assisted by using a third, relay node. Such a scenario enables application of \emph{physical network coding}, where the network coding is either done (a) jointly with the channel coding or (b) through physical combining of the communication flows over the multiple access channel. In this paper we first group the existing schemes for physical network coding into two generic schemes, termed 3--step and 2--step scheme, respectively. We investigate the conditions for maximization of the two--way rate for each individual scheme: (1) the Decode--and--Forward (DF) 3--step schemes (2) three different schemes with two steps: Amplify--and--Forward (AF), JDF and Denoise--and--Forward (DNF). While the DNF scheme has a potential to offer the best two--way rate, the most interesting result of the paper is that, for some SNR configurations of the source--relay links, JDF yields identical maximal two--way rate as the upper bound on the rate for DNF.<|reference_end|> | arxiv | @article{popovski2007physical,
title={Physical Network Coding in Two-Way Wireless Relay Channels},
author={Petar Popovski and Hiroyuki Yomo},
journal={Proc. of IEEE International Conference on Communications (ICC),
Glasgow, Scotland, 2007},
year={2007},
archivePrefix={arXiv},
eprint={0707.0459},
primaryClass={cs.IT cs.NI math.IT}
} | popovski2007physical |
arxiv-653 | 0707.0463 | Blind Estimation of Multiple Carrier Frequency Offsets | <|reference_start|>Blind Estimation of Multiple Carrier Frequency Offsets: Multiple carrier-frequency offsets (CFO) arise in a distributed antenna system, where data are transmitted simultaneously from multiple antennas. In such systems the received signal contains multiple CFOs due to mismatch between the local oscillators of transmitters and receiver. This results in a time-varying rotation of the data constellation, which needs to be compensated for at the receiver before symbol recovery. This paper proposes a new approach for blind CFO estimation and symbol recovery. The received base-band signal is over-sampled, and its polyphase components are used to formulate a virtual Multiple-Input Multiple-Output (MIMO) problem. By applying blind MIMO system estimation techniques, the system response is estimated and used to subsequently transform the multiple CFOs estimation problem into many independent single CFO estimation problems. Furthermore, an initial estimate of the CFO is obtained from the phase of the MIMO system response. The Cramer-Rao Lower bound is also derived, and the large sample performance of the proposed estimator is compared to the bound.<|reference_end|> | arxiv | @article{yu2007blind,
title={Blind Estimation of Multiple Carrier Frequency Offsets},
author={Yuanning Yu, Athina P. Petropulu, H. Vincent Poor and Visa Koivunen},
journal={arXiv preprint arXiv:0707.0463},
year={2007},
doi={10.1109/PIMRC.2007.4394103},
archivePrefix={arXiv},
eprint={0707.0463},
primaryClass={cs.IT math.IT}
} | yu2007blind |
arxiv-654 | 0707.0476 | Fractional Power Control for Decentralized Wireless Networks | <|reference_start|>Fractional Power Control for Decentralized Wireless Networks: We consider a new approach to power control in decentralized wireless networks, termed fractional power control (FPC). Transmission power is chosen as the current channel quality raised to an exponent -s, where s is a constant between 0 and 1. The choices s = 1 and s = 0 correspond to the familiar cases of channel inversion and constant power transmission, respectively. Choosing s in (0,1) allows all intermediate policies between these two extremes to be evaluated, and we see that usually neither extreme is ideal. We derive closed-form approximations for the outage probability relative to a target SINR in a decentralized (ad hoc or unlicensed) network as well as for the resulting transmission capacity, which is the number of users/m^2 that can achieve this SINR on average. Using these approximations, which are quite accurate over typical system parameter values, we prove that using an exponent of 1/2 minimizes the outage probability, meaning that the inverse square root of the channel strength is a sensible transmit power scaling for networks with a relatively low density of interferers. We also show numerically that this choice of s is robust to a wide range of variations in the network parameters. Intuitively, s=1/2 balances between helping disadvantaged users while making sure they do not flood the network with interference.<|reference_end|> | arxiv | @article{jindal2007fractional,
title={Fractional Power Control for Decentralized Wireless Networks},
author={Nihar Jindal, Steven Weber, Jeffrey G. Andrews},
journal={arXiv preprint arXiv:0707.0476},
year={2007},
doi={10.1109/T-WC.2008.071439},
archivePrefix={arXiv},
eprint={0707.0476},
primaryClass={cs.IT math.IT}
} | jindal2007fractional |
arxiv-655 | 0707.0479 | Precoding for the AWGN Channel with Discrete Interference | <|reference_start|>Precoding for the AWGN Channel with Discrete Interference: For a state-dependent DMC with input alphabet $\mathcal{X}$ and state alphabet $\mathcal{S}$ where the i.i.d. state sequence is known causally at the transmitter, it is shown that by using at most $|\mathcal{X}||\mathcal{S}|-|\mathcal{S}|+1$ out of $|\mathcal{X}|^{|\mathcal{S}|}$ input symbols of the Shannon's \emph{associated} channel, the capacity is achievable. As an example of state-dependent channels with side information at the transmitter, $M$-ary signal transmission over AWGN channel with additive $Q$-ary interference where the sequence of i.i.d. interference symbols is known causally at the transmitter is considered. For the special case where the Gaussian noise power is zero, a sufficient condition, which is independent of interference, is given for the capacity to be $\log_2 M$ bits per channel use. The problem of maximization of the transmission rate under the constraint that the channel input given any current interference symbol is uniformly distributed over the channel input alphabet is investigated. For this setting, the general structure of a communication system with optimal precoding is proposed.<|reference_end|> | arxiv | @article{farmanbar2007precoding,
title={Precoding for the AWGN Channel with Discrete Interference},
author={Hamidreza Farmanbar and Amir K. Khandani},
journal={arXiv preprint arXiv:0707.0479},
year={2007},
number={UW-ECE-2006-24},
archivePrefix={arXiv},
eprint={0707.0479},
primaryClass={cs.IT math.IT}
} | farmanbar2007precoding |
arxiv-656 | 0707.0498 | The Role of Time in the Creation of Knowledge | <|reference_start|>The Role of Time in the Creation of Knowledge: This paper I assume that in humans the creation of knowledge depends on a discrete time, or stage, sequential decision-making process subjected to a stochastic, information transmitting environment. For each time-stage, this environment randomly transmits Shannon type information-packets to the decision-maker, who examines each of them for relevancy and then determines his optimal choices. Using this set of relevant information-packets, the decision-maker adapts, over time, to the stochastic nature of his environment, and optimizes the subjective expected rate-of-growth of knowledge. The decision-maker's optimal actions, lead to a decision function that involves, over time, his view of the subjective entropy of the environmental process and other important parameters at each time-stage of the process. Using this model of human behavior, one could create psychometric experiments using computer simulation and real decision-makers, to play programmed games to measure the resulting human performance.<|reference_end|> | arxiv | @article{murphy2007the,
title={The Role of Time in the Creation of Knowledge},
author={Roy E. Murphy},
journal={arXiv preprint arXiv:0707.0498},
year={2007},
archivePrefix={arXiv},
eprint={0707.0498},
primaryClass={cs.LG cs.AI cs.IT math.IT}
} | murphy2007the |
arxiv-657 | 0707.0500 | Location-Aided Fast Distributed Consensus in Wireless Networks | <|reference_start|>Location-Aided Fast Distributed Consensus in Wireless Networks: Existing works on distributed consensus explore linear iterations based on reversible Markov chains, which contribute to the slow convergence of the algorithms. It has been observed that by overcoming the diffusive behavior of reversible chains, certain nonreversible chains lifted from reversible ones mix substantially faster than the original chains. In this paper, we investigate the idea of accelerating distributed consensus via lifting Markov chains, and propose a class of Location-Aided Distributed Averaging (LADA) algorithms for wireless networks, where nodes' coarse location information is used to construct nonreversible chains that facilitate distributed computing and cooperative processing. First, two general pseudo-algorithms are presented to illustrate the notion of distributed averaging through chain-lifting. These pseudo-algorithms are then respectively instantiated through one LADA algorithm on grid networks, and one on general wireless networks. For a $k\times k$ grid network, the proposed LADA algorithm achieves an $\epsilon$-averaging time of $O(k\log(\epsilon^{-1}))$. Based on this algorithm, in a wireless network with transmission range $r$, an $\epsilon$-averaging time of $O(r^{-1}\log(\epsilon^{-1}))$ can be attained through a centralized algorithm. Subsequently, we present a fully-distributed LADA algorithm for wireless networks, which utilizes only the direction information of neighbors to construct nonreversible chains. It is shown that this distributed LADA algorithm achieves the same scaling law in averaging time as the centralized scheme. 
Finally, we propose a cluster-based LADA (C-LADA) algorithm, which, requiring no central coordination, provides the additional benefit of reduced message complexity compared with the distributed LADA algorithm.<|reference_end|> | arxiv | @article{li2007location-aided,
title={Location-Aided Fast Distributed Consensus in Wireless Networks},
author={Wenjun Li, Yanbing Zhang and Huaiyu Dai},
journal={arXiv preprint arXiv:0707.0500},
year={2007},
doi={10.1109/TIT.2010.2081030},
archivePrefix={arXiv},
eprint={0707.0500},
primaryClass={cs.IT math.IT}
} | li2007location-aided |
arxiv-658 | 0707.0514 | Phase space methods and psychoacoustic models in lossy transform coding | <|reference_start|>Phase space methods and psychoacoustic models in lossy transform coding: I present a method for lossy transform coding of digital audio that uses the Weyl symbol calculus for constructing the encoding and decoding transformation. The method establishes a direct connection between a time-frequency representation of the signal dependent threshold of masked noise and the encode/decode pair. The formalism also offers a time-frequency measure of perceptual entropy.<|reference_end|> | arxiv | @article{cargo2007phase,
title={Phase space methods and psychoacoustic models in lossy transform coding},
author={Matthew Charles Cargo},
journal={arXiv preprint arXiv:0707.0514},
year={2007},
archivePrefix={arXiv},
eprint={0707.0514},
primaryClass={cs.IT cs.SD math.IT}
} | cargo2007phase |
arxiv-659 | 0707.0546 | Weighted Popular Matchings | <|reference_start|>Weighted Popular Matchings: We study the problem of assigning jobs to applicants. Each applicant has a weight and provides a preference list ranking a subset of the jobs. A matching M is popular if there is no other matching M' such that the weight of the applicants who prefer M' over M exceeds the weight of those who prefer M over M'. This paper gives efficient algorithms to find a popular matching if one exists.<|reference_end|> | arxiv | @article{mestre2007weighted,
title={Weighted Popular Matchings},
author={Juli\'an Mestre},
journal={arXiv preprint arXiv:0707.0546},
year={2007},
archivePrefix={arXiv},
eprint={0707.0546},
primaryClass={cs.DS}
} | mestre2007weighted |
arxiv-660 | 0707.0548 | From Royal Road to Epistatic Road for Variable Length Evolution Algorithm | <|reference_start|>From Royal Road to Epistatic Road for Variable Length Evolution Algorithm: Although there are some real world applications where the use of variable length representation (VLR) in Evolutionary Algorithms is natural and suitable, an academic framework is lacking for such representations. In this work we propose a family of tunable fitness landscapes based on VLR of genotypes. The fitness landscapes we propose possess a tunable degree of both neutrality and epistasis; they are inspired, on the one hand by the Royal Road fitness landscapes, and on the other hand by the NK fitness landscapes. So these landscapes offer a scale of continuity from Royal Road functions, with neutrality and no epistasis, to landscapes with a large amount of epistasis and no redundancy. To gain insight into these fitness landscapes, we first use standard tools such as adaptive walks and correlation length. Second, we evaluate the performances of evolutionary algorithms on these landscapes for various values of the neutral and the epistatic parameters; the results allow us to correlate the performances with the expected degrees of neutrality and epistasis.<|reference_end|> | arxiv | @article{platel2007from,
title={From Royal Road to Epistatic Road for Variable Length Evolution
Algorithm},
author={Michael Defoin Platel (I3S), S\'ebastien Verel (I3S), Manuel Clergue
(I3S), Philippe Collard (I3S)},
journal={Lecture notes in computer science (Lect. notes comput. sci.) ISSN
0302-9743 (27/10/2003) 3-14},
year={2007},
archivePrefix={arXiv},
eprint={0707.0548},
primaryClass={cs.NE}
} | platel2007from |
arxiv-661 | 0707.0556 | Determinacy in a synchronous pi-calculus | <|reference_start|>Determinacy in a synchronous pi-calculus: The S-pi-calculus is a synchronous pi-calculus which is based on the SL model. The latter is a relaxation of the Esterel model where the reaction to the absence of a signal within an instant can only happen at the next instant. In the present work, we present and characterise a compositional semantics of the S-pi-calculus based on suitable notions of labelled transition system and bisimulation. Based on this semantic framework, we explore the notion of determinacy and the related one of (local) confluence.<|reference_end|> | arxiv | @article{amadio2007determinacy,
title={Determinacy in a synchronous pi-calculus},
author={Roberto Amadio (PPS), Mehdi Dogguy (PPS)},
journal={From semantics to computer science: essays in honor of Gilles
Kahn, Y. Bertot et al. (Ed.) (2009) 1-27},
year={2007},
archivePrefix={arXiv},
eprint={0707.0556},
primaryClass={cs.LO}
} | amadio2007determinacy |
arxiv-662 | 0707.0562 | On a Non-Context-Free Extension of PDL | <|reference_start|>On a Non-Context-Free Extension of PDL: Over the last 25 years, a lot of work has been done on seeking decidable non-regular extensions of Propositional Dynamic Logic (PDL). Only recently, an expressive extension of PDL, allowing visibly pushdown automata (VPAs) as a formalism to describe programs, was introduced and proven to have a satisfiability problem complete for deterministic double exponential time. Lately, the VPA formalism was extended to so-called k-phase multi-stack visibly pushdown automata (k-MVPAs). Similarly to VPAs, it has been shown that the languages of k-MVPAs have desirable effective closure properties and that the emptiness problem is decidable. On the occasion of introducing k-MVPAs, it has been asked whether the extension of PDL with k-MVPAs still leads to a decidable logic. This question is answered negatively here. We prove that already for the extension of PDL with 2-phase MVPAs with two stacks, satisfiability becomes \Sigma_1^1-complete.<|reference_end|> | arxiv | @article{göller2007on,
title={On a Non-Context-Free Extension of PDL},
author={Stefan G\"oller and Dirk Nowotka},
journal={arXiv preprint arXiv:0707.0562},
year={2007},
archivePrefix={arXiv},
eprint={0707.0562},
primaryClass={cs.LO}
} | göller2007on |
arxiv-663 | 0707.0568 | Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part I: Nash Equilibria | <|reference_start|>Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part I: Nash Equilibria: In this two-parts paper we propose a decentralized strategy, based on a game-theoretic formulation, to find out the optimal precoding/multiplexing matrices for a multipoint-to-multipoint communication system composed of a set of wideband links sharing the same physical resources, i.e., time and bandwidth. We assume, as optimality criterion, the achievement of a Nash equilibrium and consider two alternative optimization problems: 1) the competitive maximization of mutual information on each link, given constraints on the transmit power and on the spectral mask imposed by the radio spectrum regulatory bodies; and 2) the competitive maximization of the transmission rate, using finite order constellations, under the same constraints as above, plus a constraint on the average error probability. In Part I of the paper, we start by showing that the solution set of both noncooperative games is always nonempty and contains only pure strategies. Then, we prove that the optimal precoding/multiplexing scheme for both games leads to a channel diagonalizing structure, so that both matrix-valued problems can be recast in a simpler unified vector power control game, with no performance penalty. Thus, we study this simpler game and derive sufficient conditions ensuring the uniqueness of the Nash equilibrium. Interestingly, although derived under stronger constraints, incorporating for example spectral mask constraints, our uniqueness conditions have broader validity than previously known conditions. Finally, we assess the goodness of the proposed decentralized strategy by comparing its performance with the performance of a Pareto-optimal centralized scheme. 
To reach the Nash equilibria of the game, in Part II, we propose alternative distributed algorithms, along with their convergence conditions.<|reference_end|> | arxiv | @article{scutari2007optimal,
title={Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems
based on Game Theory-Part I: Nash Equilibria},
author={Gesualdo Scutari, D.P. Palomar, S. Barbarossa},
journal={arXiv preprint arXiv:0707.0568},
year={2007},
doi={10.1109/TSP.2007.907807},
archivePrefix={arXiv},
eprint={0707.0568},
primaryClass={cs.IT cs.GT math.IT}
} | scutari2007optimal |
arxiv-664 | 0707.0610 | Unfolding Orthogonal Terrains | <|reference_start|>Unfolding Orthogonal Terrains: It is shown that every orthogonal terrain, i.e., an orthogonal (right-angled) polyhedron based on a rectangle that meets every vertical line in a segment, has a grid unfolding: its surface may be unfolded to a single non-overlapping piece by cutting along grid edges defined by coordinate planes through every vertex.<|reference_end|> | arxiv | @article{o'rourke2007unfolding,
title={Unfolding Orthogonal Terrains},
author={Joseph O'Rourke},
journal={arXiv preprint arXiv:0707.0610},
year={2007},
number={Smith Technical Report 084},
archivePrefix={arXiv},
eprint={0707.0610},
primaryClass={cs.CG}
} | o'rourke2007unfolding |
arxiv-665 | 0707.0641 | Where are Bottlenecks in NK Fitness Landscapes? | <|reference_start|>Where are Bottlenecks in NK Fitness Landscapes?: Usually the offspring-parent fitness correlation is used to visualize and analyze some caracteristics of fitness landscapes such as evolvability. In this paper, we introduce a more general representation of this correlation, the Fitness Cloud (FC). We use the bottleneck metaphor to emphasise fitness levels in landscape that cause local search process to slow down. For a local search heuristic such as hill-climbing or simulated annealing, FC allows to visualize bottleneck and neutrality of landscapes. To confirm the relevance of the FC representation we show where the bottlenecks are in the well-know NK fitness landscape and also how to use neutrality information from the FC to combine some neutral operator with local search heuristic.<|reference_end|> | arxiv | @article{verel2007where,
title={Where are Bottlenecks in NK Fitness Landscapes?},
author={S\'ebastien Verel (I3S), Philippe Collard (I3S), Manuel Clergue (I3S)},
journal={Evolutionary Computation, 2003. CEC'03 (08/12/2003) 273--280},
year={2007},
doi={10.1109/CEC.2003.1299585},
archivePrefix={arXiv},
eprint={0707.0641},
primaryClass={cs.NE}
} | verel2007where |
arxiv-666 | 0707.0643 | Scuba Search : when selection meets innovation | <|reference_start|>Scuba Search : when selection meets innovation: We proposed a new search heuristic using the scuba diving metaphor. This approach is based on the concept of evolvability and tends to exploit neutrality in fitness landscape. Despite the fact that natural evolution does not directly select for evolvability, the basic idea behind the scuba search heuristic is to explicitly push the evolvability to increase. The search process switches between two phases: Conquest-of-the-Waters and Invasion-of-the-Land. A comparative study of the new algorithm and standard local search heuristics on the NKq-landscapes has shown advantage and limit of the scuba search. To enlighten qualitative differences between neutral search processes, the space is changed into a connected graph to visualize the pathways that the search is likely to follow.<|reference_end|> | arxiv | @article{verel2007scuba,
title={Scuba Search : when selection meets innovation},
author={S\'ebastien Verel (I3S), Philippe Collard (I3S), Manuel Clergue (I3S)},
journal={Evolutionary Computation, 2004. CEC2004 (23/06/2004) 924 - 931},
year={2007},
doi={10.1109/CEC.2004.1330960},
archivePrefix={arXiv},
eprint={0707.0643},
primaryClass={cs.NE}
} | verel2007scuba |
arxiv-667 | 0707.0644 | Another view of the Gaussian algorithm | <|reference_start|>Another view of the Gaussian algorithm: We introduce here a rewrite system in the group of unimodular matrices, \emph{i.e.}, matrices with integer entries and with determinant equal to $\pm 1$. We use this rewrite system to precisely characterize the mechanism of the Gaussian algorithm, which finds shortest vectors in a two--dimensional lattice given by any basis. Putting together the algorithmics of lattice reduction and rewrite system theory, we propose a new worst--case analysis of the Gaussian algorithm. There is already an optimal worst--case bound for some variant of the Gaussian algorithm due to Vall\'ee \cite{ValGaussRevisit}. She used essentially geometric considerations. Our analysis generalizes her result to the case of the usual Gaussian algorithm. An interesting point in our work is its possible (but not easy) generalization to the same problem in higher dimensions, in order to exhibit a tight upper bound for the number of iterations of LLL--like reduction algorithms in the worst case. Moreover, our method seems to work for analyzing other families of algorithms. As an illustration, the analysis of sorting algorithms is briefly developed in the last section of the paper.<|reference_end|> | arxiv | @article{akhavi2007another,
title={Another view of the Gaussian algorithm},
author={Ali Akhavi (GREYC), C\'eline Moreira (GREYC)},
journal={Proceedings of Latin'04 (04/2004) 474--487},
year={2007},
archivePrefix={arXiv},
eprint={0707.0644},
primaryClass={cs.DS cs.DM}
} | akhavi2007another |
arxiv-668 | 0707.0648 | Dial a Ride from k-forest | <|reference_start|>Dial a Ride from k-forest: The $k$-forest problem is a common generalization of both the $k$-MST and the dense-$k$-subgraph problems. Formally, given a metric space on $n$ vertices $V$, with $m$ demand pairs $\subseteq V \times V$ and a ``target'' $k\le m$, the goal is to find a minimum cost subgraph that connects at least $k$ demand pairs. In this paper, we give an $O(\min\{\sqrt{n},\sqrt{k}\})$-approximation algorithm for $k$-forest, improving on the previous best ratio of $O(n^{2/3}\log n)$ by Segev & Segev. We then apply our algorithm for $k$-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an $n$ point metric space with $m$ objects each with its own source and destination, and a vehicle capable of carrying at most $k$ objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an $\alpha$-approximation algorithm for the $k$-forest problem implies an $O(\alpha\cdot\log^2n)$-approximation algorithm for Dial-a-Ride. Using our results for $k$-forest, we get an $O(\min\{\sqrt{n},\sqrt{k}\}\cdot\log^2 n)$-approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an $O(\sqrt{k}\log n)$-approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity $k$ is large, we give a slight improvement on their results.<|reference_end|> | arxiv | @article{gupta2007dial,
title={Dial a Ride from k-forest},
author={Anupam Gupta, MohammadTaghi Hajiaghayi, Viswanath Nagarajan, R. Ravi},
journal={arXiv preprint arXiv:0707.0648},
year={2007},
archivePrefix={arXiv},
eprint={0707.0648},
primaryClass={cs.DS}
} | gupta2007dial |
arxiv-669 | 0707.0649 | Sphere Lower Bound for Rotated Lattice Constellations in Fading Channels | <|reference_start|>Sphere Lower Bound for Rotated Lattice Constellations in Fading Channels: We study the error probability performance of rotated lattice constellations in frequency-flat Nakagami-$m$ block-fading channels. In particular, we use the sphere lower bound on the underlying infinite lattice as a performance benchmark. We show that the sphere lower bound has full diversity. We observe that optimally rotated lattices with largest known minimum product distance perform very close to the lower bound, while the ensemble of random rotations is shown to lack diversity and perform far from it.<|reference_end|> | arxiv | @article{fabregas2007sphere,
title={Sphere Lower Bound for Rotated Lattice Constellations in Fading Channels},
author={Albert Guillen i Fabregas and Emanuele Viterbo},
journal={arXiv preprint arXiv:0707.0649},
year={2007},
archivePrefix={arXiv},
eprint={0707.0649},
primaryClass={cs.IT math.IT}
} | fabregas2007sphere |
arxiv-670 | 0707.0652 | How to use the Scuba Diving metaphor to solve problem with neutrality ? | <|reference_start|>How to use the Scuba Diving metaphor to solve problem with neutrality ?: We propose a new search heuristic using the scuba diving metaphor. This approach is based on the concept of evolvability and tends to exploit neutrality, which exists in many real-world problems. Although natural evolution does not directly select for evolvability, the basic idea behind the scuba search heuristic is to explicitly push evolvability to increase. A comparative study of the scuba algorithm and standard local search heuristics has shown the advantages and limitations of the scuba search. In order to tune neutrality, we use the NKq fitness landscapes and a family of travelling salesman problems (TSP) where cities are randomly placed on a lattice and where travel distance between cities is computed with the Manhattan metric. In this last problem the amount of neutrality varies with the city concentration on the grid; assuming the concentration is below one, this TSP reasonably remains an NP-hard problem.<|reference_end|> | arxiv | @article{collard2007how,
title={How to use the Scuba Diving metaphor to solve problem with neutrality ?},
  author={Philippe Collard (I3S), S\'ebastien Verel (I3S), Manuel Clergue (I3S)},
journal={ECAI'2004 (27/08/2004) 166-170},
year={2007},
archivePrefix={arXiv},
eprint={0707.0652},
primaryClass={cs.NE}
} | collard2007how |
arxiv-671 | 0707.0701 | Clustering and Feature Selection using Sparse Principal Component Analysis | <|reference_start|>Clustering and Feature Selection using Sparse Principal Component Analysis: In this paper, we study the application of sparse principal component analysis (PCA) to clustering and feature selection problems. Sparse PCA seeks sparse factors, or linear combinations of the data variables, explaining a maximum amount of variance in the data while having only a limited number of nonzero coefficients. PCA is often used as a simple clustering technique and sparse factors allow us here to interpret the clusters in terms of a reduced set of variables. We begin with a brief introduction and motivation on sparse PCA and detail our implementation of the algorithm in d'Aspremont et al. (2005). We then apply these results to some classic clustering and feature selection problems arising in biology.<|reference_end|> | arxiv | @article{luss2007clustering,
title={Clustering and Feature Selection using Sparse Principal Component
Analysis},
author={Ronny Luss, Alexandre d'Aspremont},
journal={arXiv preprint arXiv:0707.0701},
year={2007},
archivePrefix={arXiv},
eprint={0707.0701},
primaryClass={cs.AI cs.LG cs.MS}
} | luss2007clustering |
arxiv-672 | 0707.0704 | Model Selection Through Sparse Maximum Likelihood Estimation | <|reference_start|>Model Selection Through Sparse Maximum Likelihood Estimation: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.<|reference_end|> | arxiv | @article{banerjee2007model,
title={Model Selection Through Sparse Maximum Likelihood Estimation},
author={Onureena Banerjee, Laurent El Ghaoui, Alexandre d'Aspremont},
journal={arXiv preprint arXiv:0707.0704},
year={2007},
archivePrefix={arXiv},
eprint={0707.0704},
primaryClass={cs.AI cs.LG}
} | banerjee2007model |
arxiv-673 | 0707.0705 | Optimal Solutions for Sparse Principal Component Analysis | <|reference_start|>Optimal Solutions for Sparse Principal Component Analysis: Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. We formulate a new semidefinite relaxation to this problem and derive a greedy algorithm that computes a full set of good solutions for all target numbers of non zero coefficients, with total complexity O(n^3), where n is the number of variables. We then use the same relaxation to derive sufficient conditions for global optimality of a solution, which can be tested in O(n^3) per pattern. We discuss applications in subset selection and sparse recovery and show on artificial examples and biological data that our algorithm does provide globally optimal solutions in many cases.<|reference_end|> | arxiv | @article{d'aspremont2007optimal,
title={Optimal Solutions for Sparse Principal Component Analysis},
author={Alexandre d'Aspremont, Francis Bach, Laurent El Ghaoui},
journal={arXiv preprint arXiv:0707.0705},
year={2007},
archivePrefix={arXiv},
eprint={0707.0705},
primaryClass={cs.AI cs.LG}
} | d'aspremont2007optimal |
arxiv-674 | 0707.0724 | Workspace Analysis of the Parallel Module of the VERNE Machine | <|reference_start|>Workspace Analysis of the Parallel Module of the VERNE Machine: The paper addresses geometric aspects of a spatial three-degree-of-freedom parallel module, which is the parallel module of a hybrid serial-parallel 5-axis machine tool. This parallel module consists of a moving platform that is connected to a fixed base by three non-identical legs. Each leg is made up of one prismatic joint and two pairs of spherical joints, which are connected in a way that the combined effects of the three legs lead to an over-constrained mechanism with complex motion. This motion is defined as a simultaneous combination of rotation and translation. A method for computing the complete workspace of the VERNE parallel module for various tool lengths is presented. An algorithm describing this method is also introduced.<|reference_end|> | arxiv | @article{kanaan2007workspace,
title={Workspace Analysis of the Parallel Module of the VERNE Machine},
author={Daniel Kanaan (IRCCyN), Philippe Wenger (IRCCyN), Damien Chablat
(IRCCyN)},
journal={Problems of Mechanics 25, 4 (01/12/2006) 26-42},
year={2007},
archivePrefix={arXiv},
eprint={0707.0724},
primaryClass={cs.RO physics.class-ph}
} | kanaan2007workspace |
arxiv-675 | 0707.0740 | A Multi Interface Grid Discovery System | <|reference_start|>A Multi Interface Grid Discovery System: Discovery Systems (DS) can be considered as entry points for global loosely coupled distributed systems. An efficient Discovery System in essence increases the performance, reliability and decision making capability of distributed systems. With the rapid increase in scale of distributed applications, existing solutions for discovery systems are fast becoming either obsolete or incapable of handling such complexity. They are particularly ineffective at handling service lifetimes and providing up-to-date information, poor at enabling dynamic service access, and they can also impose unwanted restrictions on interfaces to widely available information repositories. In this paper we present the essential design characteristics, an implementation and a performance analysis for a discovery system capable of overcoming these deficiencies in large, globally distributed environments.<|reference_end|> | arxiv | @article{ali2007a,
title={A Multi Interface Grid Discovery System},
  author={A. Ali, A. Anjum, J. Bunn, F. Khan, R. McClatchey, H. Newman, C.
Steenberg, M. Thomas, Ian Willers},
journal={arXiv preprint arXiv:0707.0740},
year={2007},
archivePrefix={arXiv},
eprint={0707.0740},
primaryClass={cs.DC}
} | ali2007a |
arxiv-676 | 0707.0742 | Mobile Computing in Physics Analysis - An Indicator for eScience | <|reference_start|>Mobile Computing in Physics Analysis - An Indicator for eScience: This paper presents the design and implementation of a Grid-enabled physics analysis environment for handheld and other resource-limited computing devices as one example of the use of mobile devices in eScience. Handheld devices offer great potential because they provide ubiquitous access to data and round-the-clock connectivity over wireless links. Our solution aims to provide users of handheld devices the capability to launch heavy computational tasks on computational and data Grids, monitor the jobs status during execution, and retrieve results after job completion. Users carry their jobs on their handheld devices in the form of executables (and associated libraries). Users can transparently view the status of their jobs and get back their outputs without having to know where they are being executed. In this way, our system is able to act as a high-throughput computing environment where devices ranging from powerful desktop machines to small handhelds can employ the power of the Grid. The results shown in this paper are readily applicable to the wider eScience community.<|reference_end|> | arxiv | @article{ali2007mobile,
title={Mobile Computing in Physics Analysis - An Indicator for eScience},
author={A. Ali, A. Anjum, T. Azim, J. Bunn, A. Ikram, R. McClatchey, H.
Newman, C. Steenberg, M. Thomas, I. Willers},
journal={arXiv preprint arXiv:0707.0742},
year={2007},
archivePrefix={arXiv},
eprint={0707.0742},
primaryClass={cs.DC}
} | ali2007mobile |
arxiv-677 | 0707.0743 | DIANA Scheduling Hierarchies for Optimizing Bulk Job Scheduling | <|reference_start|>DIANA Scheduling Hierarchies for Optimizing Bulk Job Scheduling: The use of meta-schedulers for resource management in large-scale distributed systems often leads to a hierarchy of schedulers. In this paper, we discuss why existing meta-scheduling hierarchies are sometimes not sufficient for Grid systems due to their inability to re-organise jobs already scheduled locally. Such a job re-organisation is required to adapt to evolving loads which are common in heavily used Grid infrastructures. We propose a peer-to-peer scheduling model and evaluate it using case studies and mathematical modelling. We detail the DIANA (Data Intensive and Network Aware) scheduling algorithm and its queue management system for coping with the load distribution and for supporting bulk job scheduling. We demonstrate that such a system is beneficial for dynamic, distributed and self-organizing resource management and can assist in optimizing load or job distribution in complex Grid infrastructures.<|reference_end|> | arxiv | @article{anjum2007diana,
title={DIANA Scheduling Hierarchies for Optimizing Bulk Job Scheduling},
author={A. Anjum, R. McClatchey, H. Stockinger, A. Ali, I. Willers, M. Thomas,
M. Sagheer, K. Hasham, O. Alvi},
journal={arXiv preprint arXiv:0707.0743},
year={2007},
doi={10.1109/E-SCIENCE.2006.261173},
archivePrefix={arXiv},
eprint={0707.0743},
primaryClass={cs.DC}
} | anjum2007diana |
arxiv-678 | 0707.0744 | A process algebra based framework for promise theory | <|reference_start|>A process algebra based framework for promise theory: We present a process algebra based approach to formalize the interactions of computing devices such as the representation of policies and the resolution of conflicts. As an example we specify how promises may be used in coming to an agreement regarding a simple though practical transportation problem.<|reference_end|> | arxiv | @article{bergstra2007a,
title={A process algebra based framework for promise theory},
author={Jan Bergstra, Inge Bethke and Mark Burgess},
journal={arXiv preprint arXiv:0707.0744},
year={2007},
number={PRG0701},
archivePrefix={arXiv},
eprint={0707.0744},
primaryClass={cs.LO}
} | bergstra2007a |
arxiv-679 | 0707.0745 | Semantic Information Retrieval from Distributed Heterogeneous Data Sources | <|reference_start|>Semantic Information Retrieval from Distributed Heterogeneous Data Sources: Information retrieval from distributed heterogeneous data sources remains a challenging issue. As the number of data sources increases more intelligent retrieval techniques, focusing on information content and semantics, are required. Currently ontologies are being widely used for managing semantic knowledge, especially in the field of bioinformatics. In this paper we describe an ontology assisted system that allows users to query distributed heterogeneous data sources by hiding details like location, information structure, access pattern and semantic structure of the data. Our goal is to provide an integrated view on biomedical information sources for the Health-e-Child project with the aim to overcome the lack of sufficient semantic-based reformulation techniques for querying distributed data sources. In particular, this paper examines the problem of query reformulation across biomedical data sources, based on merged ontologies and the underlying heterogeneous descriptions of the respective data sources.<|reference_end|> | arxiv | @article{munir2007semantic,
title={Semantic Information Retrieval from Distributed Heterogeneous Data
Sources},
author={K. Munir, M. Odeh, R. McClatchey, S. Khan, I. Habib},
journal={arXiv preprint arXiv:0707.0745},
year={2007},
archivePrefix={arXiv},
eprint={0707.0745},
primaryClass={cs.DB}
} | munir2007semantic |
arxiv-680 | 0707.0748 | Experiences of Engineering Grid-Based Medical Software | <|reference_start|>Experiences of Engineering Grid-Based Medical Software: Objectives: Grid-based technologies are emerging as potential solutions for managing and collaborating on distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. Method: The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. Results: We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned.<|reference_end|> | arxiv | @article{estrella2007experiences,
title={Experiences of Engineering Grid-Based Medical Software},
  author={F. Estrella, T. Hauer, R. McClatchey, M. Odeh, D. Rogulin, T.
Solomonides},
journal={arXiv preprint arXiv:0707.0748},
year={2007},
archivePrefix={arXiv},
eprint={0707.0748},
primaryClass={cs.DC}
} | estrella2007experiences |
arxiv-681 | 0707.0761 | Managing Separation of Concerns in Grid Applications Through Architectural Model Transformations | <|reference_start|>Managing Separation of Concerns in Grid Applications Through Architectural Model Transformations: Grids enable the aggregation, virtualization and sharing of massive heterogeneous and geographically dispersed resources, using files, applications and storage devices, to solve computation and data intensive problems, across institutions and countries via temporary collaborations called virtual organizations (VO). Most implementations result in a complex superposition of software layers, often delivering low quality of service and quality of applications. As a consequence, Grid-based application design and development is increasingly complex, and the use of most classical engineering practices is unsuccessful. Not only is the development of such applications a time-consuming, error-prone and expensive task, but also the resulting applications are often hard-coded for specific Grid configurations, platforms and infrastructures. Having neither guidelines nor rules in the design of a Grid-based application is a paradox, since there are many existing architectural approaches for distributed computing which could ease and promote rigorous engineering methods based on the re-use of software components. It is our belief that the ad-hoc and semi-formal engineering approaches in current use are insufficient to tackle tomorrow's Grid development requirements. Because Grid-based applications address multi-disciplinary and complex domains (health, military, scientific computation), their engineering requires rigor and control. This paper therefore advocates a formal model-driven engineering process and a corresponding design framework and tools for building the next generation of Grids.<|reference_end|> | arxiv | @article{manset2007managing,
title={Managing Separation of Concerns in Grid Applications Through
Architectural Model Transformations},
author={David Manset, Herve Verjus, Richard McClatchey},
journal={arXiv preprint arXiv:0707.0761},
year={2007},
archivePrefix={arXiv},
eprint={0707.0761},
primaryClass={cs.SE cs.DC}
} | manset2007managing |
arxiv-682 | 0707.0762 | PhantomOS: A Next Generation Grid Operating System | <|reference_start|>PhantomOS: A Next Generation Grid Operating System: Grid Computing has made substantial advances in the past decade; these are primarily due to the adoption of standardized Grid middleware. However Grid computing has not yet become pervasive because of some barriers that we believe have been caused by the adoption of middleware centric approaches. These barriers include: scant support for major types of applications such as interactive applications; lack of flexible, autonomic and scalable Grid architectures; lack of plug-and-play Grid computing and, most importantly, no straightforward way to setup and administer Grids. PhantomOS is a project which aims to address many of these barriers. Its goal is the creation of a user friendly pervasive Grid computing platform that facilitates the rapid deployment and easy maintenance of Grids whilst providing support for major types of applications on Grids of almost any topology. In this paper we present the detailed system architecture and an overview of its implementation.<|reference_end|> | arxiv | @article{habib2007phantomos:,
title={PhantomOS: A Next Generation Grid Operating System},
author={Irfan Habib, Kamran Soomro, Ashiq Anjum, Richard McClatchey, Arshad
Ali, Peter Bloodsworth},
journal={arXiv preprint arXiv:0707.0762},
year={2007},
archivePrefix={arXiv},
eprint={0707.0762},
primaryClass={cs.DC}
} | habib2007phantomos: |
arxiv-683 | 0707.0763 | The Requirements for Ontologies in Medical Data Integration: A Case Study | <|reference_start|>The Requirements for Ontologies in Medical Data Integration: A Case Study: Evidence-based medicine is critically dependent on three sources of information: a medical knowledge base, the patient's medical record and knowledge of available resources, including, where appropriate, clinical protocols. Patient data is often scattered in a variety of databases and may, in a distributed model, be held across several disparate repositories. Consequently, addressing the needs of an evidence-based medicine community presents issues of biomedical data integration, clinical interpretation and knowledge management. This paper outlines how the Health-e-Child project has approached the challenge of requirements specification for (bio-) medical data integration, from the level of cellular data, through disease to that of patient and population. The approach is illuminated through the requirements elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three diseases being studied in the EC-funded Health-e-Child project.<|reference_end|> | arxiv | @article{anjum2007the,
title={The Requirements for Ontologies in Medical Data Integration: A Case
Study},
author={Ashiq Anjum, Peter Bloodsworth, Andrew Branson, Tamas Hauer, Richard
McClatchey, Kamran Munir, Dmitry Rogulin, Jetendr Shamdasani},
journal={arXiv preprint arXiv:0707.0763},
year={2007},
archivePrefix={arXiv},
eprint={0707.0763},
primaryClass={cs.DB}
} | anjum2007the |
arxiv-684 | 0707.0764 | p-Adic Degeneracy of the Genetic Code | <|reference_start|>p-Adic Degeneracy of the Genetic Code: Degeneracy of the genetic code is a biological way to minimize the effects of undesirable mutation changes. Degeneracy has a natural description on the 5-adic space of 64 codons $\mathcal{C}_5 (64) = \{n_0 + n_1 5 + n_2 5^2 : n_i = 1, 2, 3, 4 \},$ where $n_i$ are digits related to nucleotides as follows: C = 1, A = 2, T = U = 3, G = 4. The smallest 5-adic distance between codons joins them into 16 quadruplets, which under 2-adic distance decay into 32 doublets. p-Adically close codons are assigned to one of 20 amino acids, which are building blocks of proteins, or code termination of protein synthesis. We show that genetic code multiplets are made of the p-adically nearest codons.<|reference_end|> | arxiv | @article{dragovich2007p-adic,
title={p-Adic Degeneracy of the Genetic Code},
author={Branko Dragovich and Alexandra Dragovich},
journal={SFIN XX A1 (2007) 179-188},
year={2007},
archivePrefix={arXiv},
eprint={0707.0764},
primaryClass={q-bio.GN cs.IT math.IT physics.bio-ph}
} | dragovich2007p-adic |
arxiv-685 | 0707.0785 | Garside monoids vs divisibility monoids | <|reference_start|>Garside monoids vs divisibility monoids: Divisibility monoids (resp. Garside monoids) are a natural algebraic generalization of Mazurkiewicz trace monoids (resp. spherical Artin monoids), namely monoids in which the distributivity of the underlying lattices (resp. the existence of common multiples) is kept as an hypothesis, but the relations between the generators are not supposed to necessarily be commutations (resp. be of Coxeter type). Here, we show that the quasi-center of these monoids can be studied and described similarly, and then we exhibit the intersection between the two classes of monoids.<|reference_end|> | arxiv | @article{picantin2007garside,
title={Garside monoids vs divisibility monoids},
author={Matthieu Picantin (LIAFA)},
journal={arXiv preprint arXiv:0707.0785},
year={2007},
archivePrefix={arXiv},
eprint={0707.0785},
primaryClass={math.GR cs.DM}
} | picantin2007garside |
arxiv-686 | 0707.0796 | Performance of Linear Field Reconstruction Techniques with Noise and Uncertain Sensor Locations | <|reference_start|>Performance of Linear Field Reconstruction Techniques with Noise and Uncertain Sensor Locations: We consider a wireless sensor network, sampling a bandlimited field, described by a limited number of harmonics. Sensor nodes are irregularly deployed over the area of interest or subject to random motion; in addition sensors measurements are affected by noise. Our goal is to obtain a high quality reconstruction of the field, with the mean square error (MSE) of the estimate as performance metric. In particular, we analytically derive the performance of several reconstruction/estimation techniques based on linear filtering. For each technique, we obtain the MSE, as well as its asymptotic expression in the case where the field number of harmonics and the number of sensors grow to infinity, while their ratio is kept constant. Through numerical simulations, we show the validity of the asymptotic analysis, even for a small number of sensors. We provide some novel guidelines for the design of sensor networks when many parameters, such as field bandwidth, number of sensors, reconstruction quality, sensor motion characteristics, and noise level of the measures, have to be traded off.<|reference_end|> | arxiv | @article{nordio2007performance,
title={Performance of Linear Field Reconstruction Techniques with Noise and
Uncertain Sensor Locations},
author={A. Nordio, C.-F. Chiasserini, E. Viterbo},
journal={arXiv preprint arXiv:0707.0796},
year={2007},
doi={10.1109/TSP.2008.924865},
archivePrefix={arXiv},
eprint={0707.0796},
primaryClass={cs.OH}
} | nordio2007performance |
arxiv-687 | 0707.0799 | A New Family of Unitary Space-Time Codes with a Fast Parallel Sphere Decoder Algorithm | <|reference_start|>A New Family of Unitary Space-Time Codes with a Fast Parallel Sphere Decoder Algorithm: In this paper we propose a new design criterion and a new class of unitary signal constellations for differential space-time modulation for multiple-antenna systems over Rayleigh flat-fading channels with unknown fading coefficients. Extensive simulations show that the new codes have significantly better performance than existing codes. We have compared the performance of our codes with differential detection schemes using orthogonal designs, Cayley differential codes, fixed-point-free group codes and products of groups; for the same bit error rate, our codes allow a signal-to-noise ratio smaller by as much as 10 dB. The design of the new codes is accomplished in a systematic way through the optimization of a performance index that closely describes the bit error rate as a function of the signal-to-noise ratio. The new performance index is computationally simple and we have derived analytical expressions for its gradient with respect to constellation parameters. Decoding of the proposed constellations is reduced to a set of one-dimensional closest point problems that we solve using parallel sphere decoder algorithms. This decoding strategy can also improve the efficiency of existing codes.<|reference_end|> | arxiv | @article{chen2007a,
title={A New Family of Unitary Space-Time Codes with a Fast Parallel Sphere
Decoder Algorithm},
author={Xinjia Chen, Kemin Zhou and Jorge Aravena},
journal={IEEE Transactions on Information Theory, vol. 52, pp. 115-140,
January 2006},
year={2007},
archivePrefix={arXiv},
eprint={0707.0799},
primaryClass={cs.IT math.IT}
} | chen2007a |
arxiv-688 | 0707.0802 | Very fast watermarking by reversible contrast mapping | <|reference_start|>Very fast watermarking by reversible contrast mapping: Reversible contrast mapping (RCM) is a simple integer transform that applies to pairs of pixels. For some pairs of pixels, RCM is invertible, even if the least significant bits (LSBs) of the transformed pixels are lost. The data space occupied by the LSBs is suitable for data hiding. The embedded information bit-rates of the proposed spatial domain reversible watermarking scheme are close to the highest bit-rates reported so far. The scheme does not need additional data compression, and, in terms of mathematical complexity, it appears to be the lowest complexity one proposed up to now. A very fast lookup table implementation is proposed. Robustness against cropping can be ensured as well.<|reference_end|> | arxiv | @article{coltuc2007very,
title={Very fast watermarking by reversible contrast mapping},
author={Dinu Coltuc, Jean-Marc Chassery (GIPSA-lab)},
journal={IEEE Signal Processing Letters 14, 4 (04/2007) pp 255-258},
year={2007},
doi={10.1109/LSP.2006.884895},
archivePrefix={arXiv},
eprint={0707.0802},
primaryClass={cs.MM cs.CR cs.CV cs.IT math.IT}
} | coltuc2007very |
arxiv-689 | 0707.0805 | A New Generalization of Chebyshev Inequality for Random Vectors | <|reference_start|>A New Generalization of Chebyshev Inequality for Random Vectors: In this article, we derive a new generalization of Chebyshev inequality for random vectors. We demonstrate that the new generalization is much less conservative than the classical generalization.<|reference_end|> | arxiv | @article{chen2007a,
title={A New Generalization of Chebyshev Inequality for Random Vectors},
author={Xinjia Chen},
journal={arXiv preprint arXiv:0707.0805},
year={2007},
archivePrefix={arXiv},
eprint={0707.0805},
primaryClass={math.ST cs.LG math.PR stat.AP stat.TH}
} | chen2007a |
arxiv-690 | 0707.0808 | The Cyborg Astrobiologist: Porting from a wearable computer to the Astrobiology Phone-cam | <|reference_start|>The Cyborg Astrobiologist: Porting from a wearable computer to the Astrobiology Phone-cam: We have used a simple camera phone to significantly improve an `exploration system' for astrobiology and geology. This camera phone will make it much easier to develop and test computer-vision algorithms for future planetary exploration. We envision that the `Astrobiology Phone-cam' exploration system can be fruitfully used in other problem domains as well.<|reference_end|> | arxiv | @article{bartolo2007the,
title={The Cyborg Astrobiologist: Porting from a wearable computer to the
Astrobiology Phone-cam},
author={Alexandra Bartolo, Patrick C. McGuire, Kenneth P. Camilleri,
Christopher Spiteri, Jonathan C. Borg, Philip J. Farrugia, Jens Ormo, Javier
Gomez-Elvira, Jose Antonio Rodriguez-Manfredi, Enrique Diaz-Martinez, Helge
Ritter, Robert Haschke, Markus Oesker, Joerg Ontrup},
journal={International Journal of Astrobiology, vol. 6, issue 4, pp.
255-261 (2007)},
year={2007},
doi={10.1017/S1473550407003862},
archivePrefix={arXiv},
eprint={0707.0808},
primaryClass={cs.CV astro-ph cs.AI cs.CE cs.HC cs.NI cs.RO cs.SE}
} | bartolo2007the |
arxiv-691 | 0707.0860 | On the Minimum Number of Transmissions in Single-Hop Wireless Coding Networks | <|reference_start|>On the Minimum Number of Transmissions in Single-Hop Wireless Coding Networks: The advent of network coding presents promising opportunities in many areas of communication and networking. It has been recently shown that network coding technique can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. In wireless networks, each transmitted packet is broadcasted within a certain area and can be overheard by the neighboring nodes. When a node needs to transmit packets, it employs the opportunistic coding approach that uses the knowledge of what the node's neighbors have heard in order to reduce the number of transmissions. With this approach, each transmitted packet is a linear combination of the original packets over a certain finite field. In this paper, we focus on the fundamental problem of finding the optimal encoding for the broadcasted packets that minimizes the overall number of transmissions. We show that this problem is NP-complete over GF(2) and establish several fundamental properties of the optimal solution. We also propose a simple heuristic solution for the problem based on graph coloring and present some empirical results for random settings.<|reference_end|> | arxiv | @article{rouayheb2007on,
title={On the Minimum Number of Transmissions in Single-Hop Wireless Coding
Networks},
author={Salim Y. El Rouayheb, Mohammad Asad R. Chaudhry, and Alex Sprintson},
journal={arXiv preprint arXiv:0707.0860},
year={2007},
archivePrefix={arXiv},
eprint={0707.0860},
primaryClass={cs.IT cs.NI math.IT}
} | rouayheb2007on |
arxiv-692 | 0707.0862 | Scheduling in Data Intensive and Network Aware (DIANA) Grid Environments | <|reference_start|>Scheduling in Data Intensive and Network Aware (DIANA) Grid Environments: In Grids scheduling decisions are often made on the basis of jobs being either data or computation intensive: in data intensive situations jobs may be pushed to the data and in computation intensive situations data may be pulled to the jobs. This kind of scheduling, in which there is no consideration of network characteristics, can lead to performance degradation in a Grid environment and may result in large processing queues and job execution delays due to site overloads. In this paper we describe a Data Intensive and Network Aware (DIANA) meta-scheduling approach, which takes into account data, processing power and network characteristics when making scheduling decisions across multiple sites. Through a practical implementation on a Grid testbed, we demonstrate that queue and execution times of data-intensive jobs can be significantly improved when we introduce our proposed DIANA scheduler. The basic scheduling decisions are dictated by a weighting factor for each potential target location which is a calculated function of network characteristics, processing cycles and data location and size. The job scheduler provides a global ranking of the computing resources and then selects an optimal one on the basis of this overall access and execution cost. The DIANA approach considers the Grid as a combination of active network elements and takes network characteristics as a first class criterion in the scheduling decision matrix along with computation and data. The scheduler can then make informed decisions by taking into account the changing state of the network, locality and size of the data and the pool of available processing cycles.<|reference_end|> | arxiv | @article{mcclatchey2007scheduling,
title={Scheduling in Data Intensive and Network Aware (DIANA) Grid Environments},
author={Richard McClatchey, Ashiq Anjum, Heinz Stockinger, Arshad Ali, Ian
Willers, Michael Thomas},
journal={arXiv preprint arXiv:0707.0862},
year={2007},
archivePrefix={arXiv},
eprint={0707.0862},
primaryClass={cs.DC}
} | mcclatchey2007scheduling |
arxiv-693 | 0707.0871 | Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part II: Algorithms | <|reference_start|>Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part II: Algorithms: In this two-part paper, we address the problem of finding the optimal precoding/multiplexing scheme for a set of non-cooperative links sharing the same physical resources, e.g., time and bandwidth. We consider two alternative optimization problems: P.1) the maximization of mutual information on each link, given constraints on the transmit power and spectral mask; and P.2) the maximization of the transmission rate on each link, using finite order constellations, under the same constraints as in P.1, plus a constraint on the maximum average error probability on each link. Aiming at finding decentralized strategies, we adopted as optimality criterion the achievement of a Nash equilibrium and thus we formulated both problems P.1 and P.2 as strategic noncooperative (matrix-valued) games. In Part I of this two-part paper, after deriving the optimal structure of the linear transceivers for both games, we provided a unified set of sufficient conditions that guarantee the uniqueness of the Nash equilibrium. In this Part II, we focus on the achievement of the equilibrium and propose alternative distributed iterative algorithms that solve both games. Specifically, the new proposed algorithms are the following: 1) the sequential and simultaneous iterative waterfilling based algorithms, incorporating spectral mask constraints; 2) the sequential and simultaneous gradient projection based algorithms, establishing an interesting link with variational inequality problems. 
Our main contribution is to provide sufficient conditions for the global convergence of all the proposed algorithms which, although derived under stronger constraints, incorporating for example spectral mask constraints, have a broader validity than the convergence conditions known in the current literature for the sequential iterative waterfilling algorithm.<|reference_end|> | arxiv | @article{scutari2007optimal,
title={Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems
based on Game Theory-Part II: Algorithms},
author={Gesualdo Scutari, Daniel P. Palomar, and Sergio Barbarossa},
journal={arXiv preprint arXiv:0707.0871},
year={2007},
doi={10.1109/TSP.2007.907808},
archivePrefix={arXiv},
eprint={0707.0871},
primaryClass={cs.IT cs.GT math.IT}
} | scutari2007optimal |
arxiv-694 | 0707.0878 | Risk Analysis in Robust Control -- Making the Case for Probabilistic Robust Control | <|reference_start|>Risk Analysis in Robust Control -- Making the Case for Probabilistic Robust Control: This paper offers a critical view of the "worst-case" approach that is the cornerstone of robust control design. It is our contention that a blind acceptance of worst-case scenarios may lead to designs that are actually more dangerous than designs based on probabilistic techniques with a built-in risk factor. The real issue is one of modeling. If one accepts that no mathematical model of uncertainties is perfect then a probabilistic approach can lead to more reliable control even if it cannot guarantee stability for all possible cases. Our presentation is based on case analysis. We first establish that worst-case is not necessarily "all-encompassing." In fact, we show that for some uncertain control problems to have a conventional robust control solution it is necessary to make assumptions that leave out some feasible cases. Once we establish that point, we argue that it is not uncommon for the risk of unaccounted cases in worst-case design to be greater than that of the accepted risk in a probabilistic approach. With an example, we quantify the risks and show that worst-case can be significantly more risky. Finally, we join our analysis with existing results on computational complexity and probabilistic robustness to argue that the deterministic worst-case analysis is not necessarily the better tool.<|reference_end|> | arxiv | @article{chen2007risk,
title={Risk Analysis in Robust Control -- Making the Case for Probabilistic
Robust Control},
author={Xinjia Chen, Jorge Aravena and Kemin Zhou},
journal={Proceedings of American Control Conference, pp. 1533-1538,
Portland, June 2005.},
year={2007},
archivePrefix={arXiv},
eprint={0707.0878},
primaryClass={math.OC cs.SY math.ST stat.TH}
} | chen2007risk |
arxiv-695 | 0707.0890 | Are there Hilbert-style Pure Type Systems? | <|reference_start|>Are there Hilbert-style Pure Type Systems?: For many a natural deduction style logic there is a Hilbert-style logic that is equivalent to it in that it has the same theorems (i.e. valid judgements with empty contexts). For intuitionistic logic, the axioms of the equivalent Hilbert-style logic can be propositions which are also known as the types of the combinators I, K and S. Hilbert-style versions of illative combinatory logic have formulations with axioms that are actual type statements for I, K and S. As pure type systems (PTSs) are, in a sense, equivalent to systems of illative combinatory logic, it might be thought that Hilbert-style PTSs (HPTSs) could be based in a similar way. This paper shows that some PTSs have very trivial equivalent HPTSs, with only the axioms as theorems, and that for many PTSs no equivalent HPTS can exist. Most commonly used PTSs belong to these two classes. For some PTSs, however, including lambda* and the PTS at the basis of the proof assistant Coq, there is a nontrivial equivalent HPTS, with axioms that are type statements for I, K and S.<|reference_end|> | arxiv | @article{bunder2007are,
title={Are there Hilbert-style Pure Type Systems?},
  author={M. W. Bunder and W. M. J. Dekkers},
journal={Logical Methods in Computer Science, Volume 4, Issue 1 (January 7,
2008) lmcs:839},
year={2007},
doi={10.2168/LMCS-4(1:1)2008},
archivePrefix={arXiv},
eprint={0707.0890},
primaryClass={cs.LO}
} | bunder2007are |
arxiv-696 | 0707.0891 | The Nash Equilibrium Revisited: Chaos and Complexity Hidden in Simplicity | <|reference_start|>The Nash Equilibrium Revisited: Chaos and Complexity Hidden in Simplicity: The Nash Equilibrium is a much discussed, deceptively complex method for the analysis of non-cooperative games. If one reads many of the commonly available definitions, the description of the Nash Equilibrium is deceptively simple in appearance. Modern research has discovered a number of new and important complex properties of the Nash Equilibrium, some of which remain as contemporary conundrums of extraordinary difficulty and complexity. Among the recently discovered features which the Nash Equilibrium exhibits under various conditions are heteroclinic Hamiltonian dynamics, a very complex asymptotic structure in the context of two-player bi-matrix games, and a number of computationally complex or computationally intractable features in other settings. This paper reviews those findings and then suggests how they may inform various market prediction strategies.<|reference_end|> | arxiv | @article{fellman2007the,
title={The Nash Equilibrium Revisited: Chaos and Complexity Hidden in
Simplicity},
author={Philip V. Fellman},
journal={InterJournal Complex Systems, 1013, 2004.
http://www.interjournal.org/},
year={2007},
archivePrefix={arXiv},
eprint={0707.0891},
primaryClass={cs.GT cs.CC}
} | fellman2007the |
arxiv-697 | 0707.0895 | Segmentation and Context of Literary and Musical Sequences | <|reference_start|>Segmentation and Context of Literary and Musical Sequences: We apply a segmentation algorithm, based on the calculation of the Jensen-Shannon divergence between probability distributions, to two symbolic sequences of literary and musical origin. The first sequence represents the successive appearance of characters in a theatrical play, and the second represents the succession of tones from the twelve-tone scale in a keyboard sonata. The algorithm divides the sequences into segments of maximal compositional divergence between them. For the play, these segments are related to changes in the frequency of appearance of different characters and in the geographical setting of the action. For the sonata, the segments correspond to tonal domains and reveal in detail the characteristic tonal progression of this kind of musical composition.<|reference_end|> | arxiv | @article{zanette2007segmentation,
title={Segmentation and Context of Literary and Musical Sequences},
author={Damian H. Zanette},
journal={arXiv preprint arXiv:0707.0895},
year={2007},
archivePrefix={arXiv},
eprint={0707.0895},
primaryClass={cs.CL physics.data-an}
} | zanette2007segmentation |
arxiv-698 | 0707.0909 | Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic Frequencies | <|reference_start|>Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic Frequencies: Cognitive radios sense the radio spectrum in order to find unused frequency bands and use them in an agile manner. Transmission by the primary user must be detected reliably even in the low signal-to-noise ratio (SNR) regime and in the face of shadowing and fading. Communication signals are typically cyclostationary, and have many periodic statistical properties related to the symbol rate, the coding and modulation schemes as well as the guard periods, for example. These properties can be exploited in designing a detector, and for distinguishing between the primary and secondary users' signals. In this paper, a generalized likelihood ratio test (GLRT) for detecting the presence of cyclostationarity using multiple cyclic frequencies is proposed. Distributed decision making is employed by combining the quantized local test statistics from many secondary users. User cooperation allows for mitigating the effects of shadowing and provides a larger footprint for the cognitive radio system. Simulation examples demonstrate the resulting performance gains in the low SNR regime and the benefits of cooperative detection.<|reference_end|> | arxiv | @article{lundén2007spectrum,
title={Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic
Frequencies},
  author={Jarmo Lundén, Visa Koivunen, Anu Huttunen, H. Vincent Poor},
journal={arXiv preprint arXiv:0707.0909},
year={2007},
archivePrefix={arXiv},
eprint={0707.0909},
primaryClass={cs.IT math.IT}
} | lundén2007spectrum |
arxiv-699 | 0707.0926 | Theorem proving support in programming language semantics | <|reference_start|>Theorem proving support in programming language semantics: We describe several views of the semantics of a simple programming language as formal documents in the calculus of inductive constructions that can be verified by the Coq proof system. Covered aspects are natural semantics, denotational semantics, axiomatic semantics, and abstract interpretation. Descriptions as recursive functions are also provided whenever suitable, thus yielding a verification condition generator and a static analyser that can be run inside the theorem prover for use in reflective proofs. Extraction of an interpreter from the denotational semantics is also described. All different aspects are formally proved sound with respect to the natural semantics specification.<|reference_end|> | arxiv | @article{bertot2007theorem,
title={Theorem proving support in programming language semantics},
author={Yves Bertot (INRIA Sophia Antipolis)},
journal={arXiv preprint arXiv:0707.0926},
year={2007},
archivePrefix={arXiv},
eprint={0707.0926},
primaryClass={cs.LO cs.PL}
} | bertot2007theorem |
arxiv-700 | 0707.0969 | Resource Allocation for Wireless Fading Relay Channels: Max-Min Solution | <|reference_start|>Resource Allocation for Wireless Fading Relay Channels: Max-Min Solution: As a basic information-theoretic model for fading relay channels, the parallel relay channel is first studied, for which lower and upper bounds on the capacity are derived. For the parallel relay channel with degraded subchannels, the capacity is established, and is further demonstrated via the Gaussian case, for which the synchronized and asynchronized capacities are obtained. The capacity achieving power allocation at the source and relay nodes among the subchannels is characterized. The fading relay channel is then studied, for which resource allocations that maximize the achievable rates are obtained for both the full-duplex and half-duplex cases. Capacities are established for fading relay channels that satisfy certain conditions.<|reference_end|> | arxiv | @article{liang2007resource,
title={Resource Allocation for Wireless Fading Relay Channels: Max-Min Solution},
author={Yingbin Liang, Venugopal V. Veeravalli, H. Vincent Poor},
journal={arXiv preprint arXiv:0707.0969},
year={2007},
archivePrefix={arXiv},
eprint={0707.0969},
primaryClass={cs.IT math.IT}
} | liang2007resource |