corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-301 | 0705.1336 | Diversity-Multiplexing Tradeoff via Asymptotic Analysis of Large MIMO Systems | <|reference_start|>Diversity-Multiplexing Tradeoff via Asymptotic Analysis of Large MIMO Systems: Diversity-multiplexing tradeoff (DMT) presents a compact framework to compare various MIMO systems and channels in terms of the two main advantages they provide (i.e. high data rate and/or low error rate). This tradeoff was characterized asymptotically (SNR-> infinity) for i.i.d. Rayleigh fading channel by Zheng and Tse [1]. The asymptotic DMT overestimates the finite-SNR one [2]. In this paper, using the recent results on the asymptotic (in the number of antennas) outage capacity distribution, we derive and analyze the finite-SNR DMT for a broad class of channels (not necessarily Rayleigh fading). Based on this, we give the convergence conditions for the asymptotic DMT to be approached by the finite-SNR one. The multiplexing gain definition is shown to affect critically the convergence point: when the multiplexing gain is defined via the mean (ergodic) capacity, the convergence takes place at realistic SNR values. Furthermore, in this case the diversity gain can also be used to estimate the outage probability with reasonable accuracy. The multiplexing gain definition via the high-SNR asymptote of the mean capacity (as in [1]) results in very slow convergence for moderate to large systems (as 1/ln(SNR)^2) and, hence, the asymptotic DMT cannot be used at realistic SNR values. For this definition, the high-SNR threshold increases exponentially in the number of antennas and in the multiplexing gain. For correlated keyhole channel, the diversity gain is shown to decrease with correlation and power imbalance of the channel. While the SNR-asymptotic DMT of Zheng and Tse does not capture this effect, the size-asymptotic DMT does.<|reference_end|> | arxiv | @article{loyka2007diversity-multiplexing,
title={Diversity-Multiplexing Tradeoff via Asymptotic Analysis of Large MIMO
Systems},
  author={Sergey Loyka and George Levin},
journal={arXiv preprint arXiv:0705.1336},
year={2007},
doi={10.1109/ISIT.2007.4557189},
archivePrefix={arXiv},
eprint={0705.1336},
primaryClass={cs.IT math.IT}
} | loyka2007diversity-multiplexing |
arxiv-302 | 0705.1340 | On Optimum Power Allocation for the V-BLAST | <|reference_start|>On Optimum Power Allocation for the V-BLAST: A unified analytical framework for optimum power allocation in the unordered V-BLAST algorithm and its comparative performance analysis are presented. Compact closed-form approximations for the optimum power allocation are derived, based on average total and block error rates. The choice of the criterion has little impact on the power allocation and, overall, the optimum strategy is to allocate more power to lower step transmitters and less to higher ones. High-SNR approximations for optimized average block and total error rates are given. The SNR gain of optimization is rigorously defined and studied using analytical tools, including lower and upper bounds, high and low SNR approximations. The gain is upper bounded by the number of transmitters, for any modulation format and type of fading channel. While the average optimization is less complex than the instantaneous one, its performance is almost as good at high SNR. A measure of robustness of the optimized algorithm is introduced and evaluated. The optimized algorithm is shown to be robust to perturbations in individual and total transmit powers. Based on the algorithm robustness, a pre-set power allocation is suggested as a low-complexity alternative to the other optimization strategies, which exhibits only a minor loss in performance over the practical SNR range.<|reference_end|> | arxiv | @article{kostina2007on,
title={On Optimum Power Allocation for the V-BLAST},
  author={Victoria Kostina and Sergey Loyka},
journal={arXiv preprint arXiv:0705.1340},
year={2007},
doi={10.1109/TCOMM.2008.060517},
archivePrefix={arXiv},
eprint={0705.1340},
primaryClass={cs.IT math.IT}
} | kostina2007on |
arxiv-303 | 0705.1343 | The Optimal Design of Three Degree-of-Freedom Parallel Mechanisms for Machining Applications | <|reference_start|>The Optimal Design of Three Degree-of-Freedom Parallel Mechanisms for Machining Applications: The subject of this paper is the optimal design of a parallel mechanism intended for three-axis machining applications. Parallel mechanisms are interesting alternative designs in this context but most of them are designed for three- or six-axis machining applications. In the last case, the position and the orientation of the tool are coupled and the shape of the workspace is complex. The aim of this paper is to use a simple parallel mechanism with two-degree-of-freedom (dof) for translational motions and to add one leg to have one-dof rotational motion. The kinematics and singular configurations are studied as well as an optimization method. The three-degree-of-freedom mechanisms analyzed in this paper can be extended to four-axis machines by adding a fourth axis in series with the first two.<|reference_end|> | arxiv | @article{chablat2007the,
title={The Optimal Design of Three Degree-of-Freedom Parallel Mechanisms for
Machining Applications},
  author={Damien Chablat (IRCCyN) and Philippe Wenger (IRCCyN) and F\'elix Majou (IRCCyN)},
  journal={The 11th International Conference on Advanced Robotics (2003) 1-6},
year={2007},
archivePrefix={arXiv},
eprint={0705.1343},
primaryClass={cs.RO}
} | chablat2007the |
arxiv-304 | 0705.1344 | Classification of one family of 3R positioning Manipulators | <|reference_start|>Classification of one family of 3R positioning Manipulators: The aim of this paper is to classify one family of 3R serial positioning manipulators. This categorization is based on the number of cusp points of quaternary, binary, generic and non-generic manipulators. It was found three subsets of manipulators with 0, 2 or 4 cusp points and one homotopy class for generic quaternary manipulators. This classification allows us to define the design parameters for which the manipulator is cuspidal or not, i.e., for which the manipulator can or cannot change posture without meeting a singularity, respectively.<|reference_end|> | arxiv | @article{baili2007classification,
title={Classification of one family of 3R positioning Manipulators},
  author={Maher Baili (IRCCyN) and Philippe Wenger (IRCCyN) and Damien Chablat (IRCCyN)},
  journal={The 11th International Conference on Advanced Robotics (2003) 1-6},
year={2007},
archivePrefix={arXiv},
eprint={0705.1344},
primaryClass={cs.RO}
} | baili2007classification |
arxiv-305 | 0705.1345 | Degree Optimization and Stability Condition for the Min-Sum Decoder | <|reference_start|>Degree Optimization and Stability Condition for the Min-Sum Decoder: The min-sum (MS) algorithm is arguably the second most fundamental algorithm in the realm of message passing due to its optimality (for a tree code) with respect to the {\em block error} probability \cite{Wiberg}. There also seems to be a fundamental relationship of MS decoding with the linear programming decoder \cite{Koetter}. Despite its importance, its fundamental properties have not nearly been studied as well as those of the sum-product (also known as BP) algorithm. We address two questions related to the MS rule. First, we characterize the stability condition under MS decoding. It turns out to be essentially the same condition as under BP decoding. Second, we perform a degree distribution optimization. Contrary to the case of BP decoding, under MS decoding the thresholds of the best degree distributions for standard irregular LDPC ensembles are significantly bounded away from the Shannon threshold. More precisely, on the AWGN channel, for the best codes that we find, the gap to capacity is 1dB for a rate 0.3 code and it is 0.4dB when the rate is 0.9 (the gap decreases monotonically as we increase the rate). We also used the optimization procedure to design codes for modified MS algorithm where the output of the check node is scaled by a constant $1/\alpha$. For $\alpha = 1.25$, we observed that the gap to capacity was lesser for the modified MS algorithm when compared with the MS algorithm. However, it was still quite large, varying from 0.75 dB to 0.2 dB for rates between 0.3 and 0.9. We conclude by posing what we consider to be the most important open questions related to the MS algorithm.<|reference_end|> | arxiv | @article{bhattad2007degree,
title={Degree Optimization and Stability Condition for the Min-Sum Decoder},
  author={Kapil Bhattad and Vishwambhar Rathi and Ruediger Urbanke},
journal={arXiv preprint arXiv:0705.1345},
year={2007},
archivePrefix={arXiv},
eprint={0705.1345},
primaryClass={cs.IT math.IT}
} | bhattad2007degree |
arxiv-306 | 0705.1364 | An Approximation Algorithm for Shortest Descending Paths | <|reference_start|>An Approximation Algorithm for Shortest Descending Paths: A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n^2 X / e) Steiner points so that after an O(n^2 X / e * log(n X /e))-time preprocessing phase for a given vertex s, we can determine a (1+e)-approximate SDP from s to any point v in O(n) time if v is either a vertex of the terrain or a Steiner point, and in O(n X /e) time otherwise. Here n is the size of the terrain, and X is a parameter of the geometry of the terrain.<|reference_end|> | arxiv | @article{ahmed2007an,
title={An Approximation Algorithm for Shortest Descending Paths},
author={Mustaq Ahmed and Anna Lubiw},
journal={arXiv preprint arXiv:0705.1364},
year={2007},
number={CS-2007-14},
archivePrefix={arXiv},
eprint={0705.1364},
primaryClass={cs.CG cs.DS}
} | ahmed2007an |
arxiv-307 | 0705.1367 | Logic Column 18: Alternative Logics: A Book Review | <|reference_start|>Logic Column 18: Alternative Logics: A Book Review: This article discusses two books on the topic of alternative logics in science: "Deviant Logic", by Susan Haack, and "Alternative Logics: Do Sciences Need Them?", edited by Paul Weingartner.<|reference_end|> | arxiv | @article{pucella2007logic,
title={Logic Column 18: Alternative Logics: A Book Review},
author={Riccardo Pucella},
journal={arXiv preprint arXiv:0705.1367},
year={2007},
archivePrefix={arXiv},
eprint={0705.1367},
primaryClass={cs.LO}
} | pucella2007logic |
arxiv-308 | 0705.1384 | Matroid Pathwidth and Code Trellis Complexity | <|reference_start|>Matroid Pathwidth and Code Trellis Complexity: We relate the notion of matroid pathwidth to the minimum trellis state-complexity (which we term trellis-width) of a linear code, and to the pathwidth of a graph. By reducing from the problem of computing the pathwidth of a graph, we show that the problem of determining the pathwidth of a representable matroid is NP-hard. Consequently, the problem of computing the trellis-width of a linear code is also NP-hard. For a finite field $\F$, we also consider the class of $\F$-representable matroids of pathwidth at most $w$, and correspondingly, the family of linear codes over $\F$ with trellis-width at most $w$. These are easily seen to be minor-closed. Since these matroids (and codes) have branchwidth at most $w$, a result of Geelen and Whittle shows that such matroids (and the corresponding codes) are characterized by finitely many excluded minors. We provide the complete list of excluded minors for $w=1$, and give a partial list for $w=2$.<|reference_end|> | arxiv | @article{kashyap2007matroid,
title={Matroid Pathwidth and Code Trellis Complexity},
author={Navin Kashyap},
journal={arXiv preprint arXiv:0705.1384},
year={2007},
archivePrefix={arXiv},
eprint={0705.1384},
primaryClass={cs.DM cs.IT math.IT}
} | kashyap2007matroid |
arxiv-309 | 0705.1390 | Machine and Component Residual Life Estimation through the Application of Neural Networks | <|reference_start|>Machine and Component Residual Life Estimation through the Application of Neural Networks: This paper concerns the use of neural networks for predicting the residual life of machines and components. In addition, the advantage of using condition-monitoring data to enhance the predictive capability of these neural networks was also investigated. A number of neural network variations were trained and tested with the data of two different reliability-related datasets. The first dataset represents the renewal case where the failed unit is repaired and restored to a good-as-new condition. Data was collected in the laboratory by subjecting a series of similar test pieces to fatigue loading with a hydraulic actuator. The average prediction error of the various neural networks being compared varied from 431 to 841 seconds on this dataset, where test pieces had a characteristic life of 8,971 seconds. The second dataset was collected from a group of pumps used to circulate a water and magnetite solution within a plant. The data therefore originated from a repaired system affected by reliability degradation. When optimized, the multi-layer perceptron neural networks trained with the Levenberg-Marquardt algorithm and the general regression neural network produced a sum-of-squares error within 11.1% of each other. The potential for using neural networks for residual life prediction and the advantage of incorporating condition-based data into the model were proven for both examples.<|reference_end|> | arxiv | @article{herzog2007machine,
title={Machine and Component Residual Life Estimation through the Application
of Neural Networks},
  author={M.A. Herzog and T. Marwala and P.S. Heyns},
journal={arXiv preprint arXiv:0705.1390},
year={2007},
archivePrefix={arXiv},
eprint={0705.1390},
primaryClass={cs.CE}
} | herzog2007machine |
arxiv-310 | 0705.1394 | The Orthoglide: Kinematics and Workspace Analysis | <|reference_start|>The Orthoglide: Kinematics and Workspace Analysis: The paper addresses kinematic and geometrical aspects of the Orthoglide, a three-DOF parallel mechanism. This machine consists of three fixed linear joints, which are mounted orthogonally, three identical legs and a mobile platform, which moves in the Cartesian x-y-z space with fixed orientation. New solutions to solve inverse/direct kinematics are proposed and a detailed workspace analysis is performed taking into account specific joint limit constraints.<|reference_end|> | arxiv | @article{pashkevich2007the,
title={The Orthoglide: Kinematics and Workspace Analysis},
  author={Anatoly Pashkevich (Robotic Laboratory) and Damien Chablat (IRCCyN) and Philippe Wenger (IRCCyN)},
journal={9th International Symposium on Advances in Robot Kinematics (2004)
1-10},
year={2007},
archivePrefix={arXiv},
eprint={0705.1394},
primaryClass={cs.RO}
} | pashkevich2007the |
arxiv-311 | 0705.1395 | Subjective Evaluation of Forms in an Immersive Environment | <|reference_start|>Subjective Evaluation of Forms in an Immersive Environment: User's perception of product, by essence subjective, is a major topic in marketing and industrial design. Many methods, based on users' tests, are used so as to characterise this perception. We are interested in three main methods: multidimensional scaling, semantic differential method, and preference mapping. These methods are used to built a perceptual space, in order to position the new product, to specify requirements by the study of user's preferences, to evaluate some product attributes, related in particular to style (aesthetic). These early stages of the design are primordial for a good orientation of the project. In parallel, virtual reality tools and interfaces are more and more efficient for suggesting to the user complex feelings, and creating in this way various levels of perceptions. In this article, we present on an example the use of multidimensional scaling, semantic differential method and preference mapping for the subjective assessment of virtual products. These products, which geometrical form is variable, are defined with a CAD model and are proposed to the user with a spacemouse and stereoscopic glasses. Advantages and limitations of such evaluation is next discussed..<|reference_end|> | arxiv | @article{petiot2007subjective,
title={Subjective Evaluation of Forms in an Immersive Environment},
  author={Jean-Fran\c{c}ois Petiot (IRCCyN) and Damien Chablat (IRCCyN)},
journal={Virtual Concept (2003) 1-6},
year={2007},
archivePrefix={arXiv},
eprint={0705.1395},
primaryClass={cs.HC cs.RO}
} | petiot2007subjective |
arxiv-312 | 0705.1397 | Realistic Rendering of Kinetostatic Indices of Mechanisms | <|reference_start|>Realistic Rendering of Kinetostatic Indices of Mechanisms: The work presented in this paper is related to the use of a haptic device in an environment of robotic simulation. Such device introduces a new approach to feel and to understand the boundaries of the workspace of mechanisms as well as its kinetostatic properties. Indeed, these concepts are abstract and thus often difficult to understand for the end-users. To catch his attention, we propose to amplify the problems of the mechanisms in order to help him to take the good decisions.<|reference_end|> | arxiv | @article{chablat2007realistic,
title={Realistic Rendering of Kinetostatic Indices of Mechanisms},
  author={Damien Chablat (IRCCyN) and Fouad Bennis (IRCCyN)},
journal={Virtual Concept (2003) 1-8},
year={2007},
archivePrefix={arXiv},
eprint={0705.1397},
primaryClass={cs.RO}
} | chablat2007realistic |
arxiv-313 | 0705.1399 | A New Concept of Modular Parallel Mechanism for Machining Applications | <|reference_start|>A New Concept of Modular Parallel Mechanism for Machining Applications: The subject of this paper is the design of a new concept of modular parallel mechanisms for three, four or five-axis machining applications. Most parallel mechanisms are designed for three- or six-axis machining applications. In the last case, the position and the orientation of the tool are coupled and the shape of the workspace is complex. The aim of this paper is to use a simple parallel mechanism with two-degree-of-freedom (dof) for translation motions and to add one or two legs to add one or two-dofs for rotation motions. The kinematics and singular configurations are studied for each mechanism.<|reference_end|> | arxiv | @article{chablat2007a,
title={A New Concept of Modular Parallel Mechanism for Machining Applications},
  author={Damien Chablat (IRCCyN) and Philippe Wenger (IRCCyN)},
  journal={Proceedings of the IEEE International Conference on Robotics and
  Automation (2003) 1-6},
year={2007},
archivePrefix={arXiv},
eprint={0705.1399},
primaryClass={cs.RO}
} | chablat2007a |
arxiv-314 | 0705.1400 | A Workspace based Classification of 3R Orthogonal Manipulators | <|reference_start|>A Workspace based Classification of 3R Orthogonal Manipulators: A classification of a family of 3-revolute (3R) positioning manipulators is established. This classification is based on the topology of their workspace. The workspace is characterized in a half-cross section by the singular curves of the manipulator. The workspace topology is defined by the number of cusps and nodes that appear on these singular curves. The design parameters space is shown to be partitioned into nine subspaces of distinct workspace topologies. Each separating surface is given as an explicit expression in the DH-parameters.<|reference_end|> | arxiv | @article{wenger2007a,
title={A Workspace based Classification of 3R Orthogonal Manipulators},
  author={Philippe Wenger (IRCCyN) and Maher Baili (IRCCyN) and Damien Chablat (IRCCyN)},
journal={9th International Symposium on Advances in Robot Kinematics (2004)
1-10},
year={2007},
archivePrefix={arXiv},
eprint={0705.1400},
primaryClass={cs.RO}
} | wenger2007a |
arxiv-315 | 0705.1409 | Singularity Surfaces and Maximal Singularity-Free Boxes in the Joint Space of Planar 3-RPR Parallel Manipulators | <|reference_start|>Singularity Surfaces and Maximal Singularity-Free Boxes in the Joint Space of Planar 3-RPR Parallel Manipulators: In this paper, a method to compute joint space singularity surfaces of 3-RPR planar parallel manipulators is first presented. Then, a procedure to determine maximal joint space singularity-free boxes is introduced. Numerical examples are given in order to illustrate graphically the results. This study is of high interest for planning trajectories in the joint space of 3-RPR parallel manipulators and for manipulators design as it may constitute a tool for choosing appropriate joint limits and thus for sizing the link lengths of the manipulator.<|reference_end|> | arxiv | @article{zein2007singularity,
title={Singularity Surfaces and Maximal Singularity-Free Boxes in the Joint
Space of Planar 3-RPR Parallel Manipulators},
  author={Mazen Zein (IRCCyN) and Philippe Wenger (IRCCyN) and Damien Chablat (IRCCyN)},
journal={12th World Congress in Mechanism and Machine Science (18/06/2007)
1-6},
year={2007},
archivePrefix={arXiv},
eprint={0705.1409},
primaryClass={cs.RO}
} | zein2007singularity |
arxiv-316 | 0705.1410 | Kinematics analysis of the parallel module of the VERNE machine | <|reference_start|>Kinematics analysis of the parallel module of the VERNE machine: The paper derives the inverse and forward kinematic equations of a spatial three-degree-of-freedom parallel mechanism, which is the parallel module of a hybrid serial-parallel 5-axis machine tool. This parallel mechanism consists of a moving platform that is connected to a fixed base by three non-identical legs. Each leg is made up of one prismatic and two pair spherical joint, which are connected in a way that the combined effects of the three legs lead to an over-constrained mechanism with complex motion. This motion is defined as a simultaneous combination of rotation and translation.<|reference_end|> | arxiv | @article{kanaan2007kinematics,
title={Kinematics analysis of the parallel module of the VERNE machine},
  author={Daniel Kanaan (IRCCyN) and Philippe Wenger (IRCCyN) and Damien Chablat (IRCCyN)},
journal={12th World Congress in Mechanism and Machine Science (18/06/2007)
1-6},
year={2007},
archivePrefix={arXiv},
eprint={0705.1410},
primaryClass={cs.RO}
} | kanaan2007kinematics |
arxiv-317 | 0705.1442 | Does P=NP? | <|reference_start|>Does P=NP?: This paper has been withdrawn Abstract: This paper has been withdrawn by the author due to the publication.<|reference_end|> | arxiv | @article{gharibyan2007does,
title={Does P=NP?},
author={Karlen Garnik Gharibyan},
  journal={Proceedings of the First International Arm Tech Congress, 2007},
year={2007},
archivePrefix={arXiv},
eprint={0705.1442},
primaryClass={cs.CC}
} | gharibyan2007does |
arxiv-318 | 0705.1450 | An Algorithm for Computing Cusp Points in the Joint Space of 3-RPR Parallel Manipulators | <|reference_start|>An Algorithm for Computing Cusp Points in the Joint Space of 3-RPR Parallel Manipulators: This paper presents an algorithm for detecting and computing the cusp points in the joint space of 3-RPR planar parallel manipulators. In manipulator kinematics, cusp points are special points, which appear on the singular curves of the manipulators. The nonsingular change of assembly mode of 3-RPR parallel manipulators was shown to be associated with the existence of cusp points. At each of these points, three direct kinematic solutions coincide. In the literature, a condition for the existence of three coincident direct kinematic solutions was established, but has never been exploited, because the algebra involved was too complicated to be solved. The algorithm presented in this paper solves this equation and detects all the cusp points in the joint space of these manipulators.<|reference_end|> | arxiv | @article{zein2007an,
title={An Algorithm for Computing Cusp Points in the Joint Space of 3-RPR
Parallel Manipulators},
  author={Mazen Zein (IRCCyN) and Philippe Wenger (IRCCyN) and Damien Chablat (IRCCyN)},
journal={European Conference on Mechanism Sciences (21/02/2006) 1-12},
year={2007},
archivePrefix={arXiv},
eprint={0705.1450},
primaryClass={cs.RO}
} | zein2007an |
arxiv-319 | 0705.1452 | Typer la d\'e-s\'erialisation sans s\'erialiser les types | <|reference_start|>Typer la d\'e-s\'erialisation sans s\'erialiser les types: In this paper, we propose a way of assigning static type information to unmarshalling functions and we describe a verification technique for unmarshalled data that preserves the execution safety provided by static type checking. This technique, whose correctness is proven, relies on singleton types whose values are transmitted to unmarshalling routines at runtime, and on an efficient checking algorithm able to deal with sharing and cycles.<|reference_end|> | arxiv | @article{henry2007typer,
title={Typer la d\'e-s\'erialisation sans s\'erialiser les types},
  author={Gr\'egoire Henry (PPS) and Michel Mauny (INRIA Rocquencourt, ENSTA-UMA) and Emmanuel Chailloux (PPS)},
journal={Journ\'ee francophone des langages applicatifs (JFLA) 2006
(01/2006)},
year={2007},
archivePrefix={arXiv},
eprint={0705.1452},
primaryClass={cs.PL}
} | henry2007typer |
arxiv-320 | 0705.1453 | DWEB: A Data Warehouse Engineering Benchmark | <|reference_start|>DWEB: A Data Warehouse Engineering Benchmark: Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users comparing the performances of different systems, or help system engineers testing the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows to generate various ad-hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as a Java free software that can be interfaced with most existing relational database management systems. A sample usage of DWEB is also provided in this paper.<|reference_end|> | arxiv | @article{darmont2007dweb:,
title={DWEB: A Data Warehouse Engineering Benchmark},
  author={J\'er\^ome Darmont (ERIC) and Fadila Bentayeb (ERIC) and Omar Boussa\"id (ERIC)},
journal={LNCS, Vol. 3589 (08/2005) 85-94},
year={2007},
archivePrefix={arXiv},
eprint={0705.1453},
primaryClass={cs.DB}
} | darmont2007dweb: |
arxiv-321 | 0705.1454 | DOEF: A Dynamic Object Evaluation Framework | <|reference_start|>DOEF: A Dynamic Object Evaluation Framework: In object-oriented or object-relational databases such as multimedia databases or most XML databases, access patterns are not static, i.e., applications do not always access the same objects in the same order repeatedly. However, this has been the way these databases and associated optimisation techniques like clustering have been evaluated up to now. This paper opens up research regarding this issue by proposing a dynamic object evaluation framework (DOEF) that accomplishes access pattern change by defining configurable styles of change. This preliminary prototype has been designed to be open and fully extensible. To illustrate the capabilities of DOEF, we used it to compare the performances of four state of the art dynamic clustering algorithms. The results show that DOEF is indeed effective at determining the adaptability of each dynamic clustering algorithm to changes in access pattern.<|reference_end|> | arxiv | @article{he2007doef:,
title={DOEF: A Dynamic Object Evaluation Framework},
  author={Zhen He and J\'er\^ome Darmont (ERIC)},
journal={LNCS, Vol. 2736 (09/2003) 662-671},
year={2007},
archivePrefix={arXiv},
eprint={0705.1454},
primaryClass={cs.DB}
} | he2007doef: |
arxiv-322 | 0705.1455 | Decision tree modeling with relational views | <|reference_start|>Decision tree modeling with relational views: Data mining is a useful decision support technique that can be used to discover production rules in warehouses or corporate data. Data mining research has made much effort to apply various mining algorithms efficiently on large databases. However, a serious problem in their practical application is the long processing time of such algorithms. Nowadays, one of the key challenges is to integrate data mining methods within the framework of traditional database systems. Indeed, such implementations can take advantage of the efficiency provided by SQL engines. In this paper, we propose an integrating approach for decision trees within a classical database system. In other words, we try to discover knowledge from relational databases, in the form of production rules, via a procedure embedding SQL queries. The obtained decision tree is defined by successive, related relational views. Each view corresponds to a given population in the underlying decision tree. We selected the classical Induction Decision Tree (ID3) algorithm to build the decision tree. To prove that our implementation of ID3 works properly, we successfully compared the output of our procedure with the output of an existing and validated data mining software, SIPINA. Furthermore, since our approach is tuneable, it can be generalized to any other similar decision tree-based method.<|reference_end|> | arxiv | @article{bentayeb2007decision,
title={Decision tree modeling with relational views},
  author={Fadila Bentayeb (ERIC) and J\'er\^ome Darmont (ERIC)},
journal={LNAI, Vol. 2366 (06/2002) 423-431},
year={2007},
archivePrefix={arXiv},
eprint={0705.1455},
primaryClass={cs.DB}
} | bentayeb2007decision |
arxiv-323 | 0705.1456 | Warehousing Web Data | <|reference_start|>Warehousing Web Data: In a data warehousing process, mastering the data preparation phase allows substantial gains in terms of time and performance when performing multidimensional analysis or using data mining algorithms. Furthermore, a data warehouse can require external data. The web is a prevalent data source in this context. In this paper, we propose a modeling process for integrating diverse and heterogeneous (so-called multiform) data into a unified format. Furthermore, the very schema definition provides first-rate metadata in our data warehousing context. At the conceptual level, a complex object is represented in UML. Our logical model is an XML schema that can be described with a DTD or the XML-Schema language. Eventually, we have designed a Java prototype that transforms our multiform input data into XML documents representing our physical model. Then, the XML documents we obtain are mapped into a relational database we view as an ODS (Operational Data Storage), whose content will have to be re-modeled in a multidimensional way to allow its storage in a star schema-based warehouse and, later, its analysis.<|reference_end|> | arxiv | @article{darmont2007warehousing,
title={Warehousing Web Data},
  author={J\'er\^ome Darmont (ERIC) and Omar Boussa\"id (ERIC) and Fadila Bentayeb (ERIC)},
journal={4th International Conference on Information Integration and
Web-based Applications and Services (iiWAS 02) (09/2002) 148-152},
year={2007},
archivePrefix={arXiv},
eprint={0705.1456},
primaryClass={cs.DB}
} | darmont2007warehousing |
arxiv-324 | 0705.1457 | Web data modeling for integration in data warehouses | <|reference_start|>Web data modeling for integration in data warehouses: In a data warehousing process, the data preparation phase is crucial. Mastering this phase allows substantial gains in terms of time and performance when performing a multidimensional analysis or using data mining algorithms. Furthermore, a data warehouse can require external data. The web is a prevalent data source in this context, but the data broadcasted on this medium are very heterogeneous. We propose in this paper a UML conceptual model for a complex object representing a superclass of any useful data source (databases, plain texts, HTML and XML documents, images, sounds, video clips...). The translation into a logical model is achieved with XML, which helps integrating all these diverse, heterogeneous data into a unified format, and whose schema definition provides first-rate metadata in our data warehousing context. Moreover, we benefit from XML's flexibility, extensibility and from the richness of the semi-structured data model, but we are still able to later map XML documents into a database if more structuring is needed.<|reference_end|> | arxiv | @article{miniaoui2007web,
title={Web data modeling for integration in data warehouses},
author={Sami Miniaoui (ERIC), J\'er\^ome Darmont (ERIC), Omar Boussa\"id
(ERIC)},
journal={First International Workshop on Multimedia Data and Document
Engineering (MDDE 01) (07/2001) 88-97},
year={2007},
archivePrefix={arXiv},
eprint={0705.1457},
primaryClass={cs.DB}
} | miniaoui2007web |
arxiv-325 | 0705.1458 | Mixing the Objective Caml and C# Programming Models in the .Net Framework | <|reference_start|>Mixing the Objective Caml and C# Programming Models in the .Net Framework: We present a new code generator, called O'Jacare.net, to inter-operate between C# and Objective Caml through their object models. O'Jacare.net defines a basic IDL (Interface Definition Language) that describes classes and interfaces in order to communicate between Objective Caml and C#. O'Jacare.net generates all needed wrapper classes and takes advantage of static type checking in both worlds. Although the IDL intersects these two object models, O'Jacare.net allows combining features from both.<|reference_end|> | arxiv | @article{chailloux2007mixing,
title={Mixing the Objective Caml and C# Programming Models in the .Net
Framework},
author={Emmanuel Chailloux (PPS), Gr\'egoire Henry (PPS), Rapha\"el
Montelatici (PPS)},
journal={Workshop on MULTIPARADIGM PROGRAMMING WITH OO LANGUAGES (MPOOL),
Norv\`ege (06/2004)},
year={2007},
archivePrefix={arXiv},
eprint={0705.1458},
primaryClass={cs.PL}
} | chailloux2007mixing |
arxiv-326 | 0705.1481 | Actin - Technical Report | <|reference_start|>Actin - Technical Report: The Boolean satisfiability problem (SAT) can be solved efficiently with variants of the DPLL algorithm. For industrial SAT problems, DPLL with conflict analysis dependent dynamic decision heuristics has proved to be particularly efficient, e.g. in Chaff. In this work, algorithms that initialize the variable activity values in the solver MiniSAT v1.14 by analyzing the CNF are evolved using genetic programming (GP), with the goal to reduce the total number of conflicts of the search and the solving time. The effect of using initial activities other than zero is examined by initializing with random numbers. The possibility of countering the detrimental effects of reordering the CNF with improved initialization is investigated. The best result found (with validation testing on further problems) was used in the solver Actin, which was submitted to SAT-Race 2006.<|reference_end|> | arxiv | @article{kibria2007actin,
title={Actin - Technical Report},
author={Raihan H. Kibria},
journal={arXiv preprint arXiv:0705.1481},
year={2007},
archivePrefix={arXiv},
eprint={0705.1481},
primaryClass={cs.NE}
} | kibria2007actin |
arxiv-327 | 0705.1521 | A note on module-composed graphs | <|reference_start|>A note on module-composed graphs: In this paper we consider module-composed graphs, i.e. graphs which can be defined by a sequence of one-vertex insertions v_1,...,v_n, such that the neighbourhood of vertex v_i, 2<= i<= n, forms a module (a homogeneous set) of the graph defined by vertices v_1,..., v_{i-1}. We show that module-composed graphs are HHDS-free and thus homogeneously orderable, weakly chordal, and perfect. Every bipartite distance hereditary graph, every (co-2C_4,P_4)-free graph and thus every trivially perfect graph is module-composed. We give an O(|V_G|(|V_G|+|E_G|)) time algorithm to decide whether a given graph G is module-composed and construct a corresponding module-sequence. For the case of bipartite graphs, module-composed graphs are exactly distance hereditary graphs, which implies simple linear time algorithms for their recognition and construction of a corresponding module-sequence.<|reference_end|> | arxiv | @article{gurski2007a,
title={A note on module-composed graphs},
author={Frank Gurski},
journal={arXiv preprint arXiv:0705.1521},
year={2007},
archivePrefix={arXiv},
eprint={0705.1521},
primaryClass={cs.DS}
} | gurski2007a |
arxiv-328 | 0705.1541 | Unfolding Manhattan Towers | <|reference_start|>Unfolding Manhattan Towers: We provide an algorithm for unfolding the surface of any orthogonal polyhedron that falls into a particular shape class we call Manhattan Towers, to a nonoverlapping planar orthogonal polygon. The algorithm cuts along edges of a 4x5x1 refinement of the vertex grid.<|reference_end|> | arxiv | @article{damian2007unfolding,
title={Unfolding Manhattan Towers},
author={Mirela Damian, Robin Flatland, Joseph O'Rourke},
journal={arXiv preprint arXiv:0705.1541},
year={2007},
archivePrefix={arXiv},
eprint={0705.1541},
primaryClass={cs.CG cs.DM}
} | damian2007unfolding |
arxiv-329 | 0705.1583 | Wireless Networking to Support Data and Voice Communication Using Spread Spectrum Technology in The Physical Layer | <|reference_start|>Wireless Networking to Support Data and Voice Communication Using Spread Spectrum Technology in The Physical Layer: Wireless networking is rapidly growing and is becoming an inexpensive technology which allows multiple users to simultaneously access the network and the internet while roaming about the campus. In the present work, the software development of a wireless LAN (WLAN) is highlighted. This WLAN utilizes direct sequence spread spectrum (DSSS) technology at a 902 MHz RF carrier frequency in its physical layer. Cost-effective installation and the antijamming property of spread spectrum technology are the major advantages of this work.<|reference_end|> | arxiv | @article{dhar2007wireless,
title={Wireless Networking to Support Data and Voice Communication Using Spread
Spectrum Technology in The Physical Layer},
author={Sourav Dhar and Rabindranath Bera},
journal={arXiv preprint arXiv:0705.1583},
year={2007},
archivePrefix={arXiv},
eprint={0705.1583},
primaryClass={cs.NI}
} | dhar2007wireless |
arxiv-330 | 0705.1585 | HMM Speaker Identification Using Linear and Non-linear Merging Techniques | <|reference_start|>HMM Speaker Identification Using Linear and Non-linear Merging Techniques: Speaker identification is a powerful, non-invasive and inexpensive biometric technique. The recognition accuracy, however, deteriorates when noise levels affect a specific band of frequency. In this paper, we present a sub-band based speaker identification that intends to improve the live testing performance. Each frequency sub-band is processed and classified independently. We also compare the linear and non-linear merging techniques for the sub-bands recognizer. Support vector machines and Gaussian Mixture models are the non-linear merging techniques that are investigated. Results showed that the sub-band based method used with linear merging techniques enormously improved the performance of the speaker identification over the performance of wide-band recognizers when tested live. A live testing improvement of 9.78% was achieved.<|reference_end|> | arxiv | @article{mahola2007hmm,
title={HMM Speaker Identification Using Linear and Non-linear Merging
Techniques},
author={Unathi Mahola, Fulufhelo V. Nelwamondo, Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.1585},
year={2007},
archivePrefix={arXiv},
eprint={0705.1585},
primaryClass={cs.LG}
} | mahola2007hmm |
arxiv-331 | 0705.1612 | A Class of LDPC Erasure Distributions with Closed-Form Threshold Expression | <|reference_start|>A Class of LDPC Erasure Distributions with Closed-Form Threshold Expression: In this paper, a family of low-density parity-check (LDPC) degree distributions, whose decoding threshold on the binary erasure channel (BEC) admits a simple closed form, is presented. These degree distributions are a subset of the check regular distributions (i.e. all the check nodes have the same degree), and are referred to as $p$-positive distributions. It is proved that the threshold for a $p$-positive distribution is simply expressed by $[\lambda'(0)\rho'(1)]^{-1}$. Besides this closed-form threshold expression, the $p$-positive distributions exhibit three additional properties. First, for given code rate, check degree and maximum variable degree, they are in some cases characterized by a threshold which is extremely close to that of the best known check regular distributions, under the same set of constraints. Second, the threshold optimization problem within the $p$-positive class can be solved in some cases with analytic methods, without using any numerical optimization tool. Third, these distributions can achieve the BEC capacity. The last property is shown by proving that the well-known binomial degree distributions belong to the $p$-positive family.<|reference_end|> | arxiv | @article{paolini2007a,
title={A Class of LDPC Erasure Distributions with Closed-Form Threshold
Expression},
author={E. Paolini, M. Chiani},
journal={arXiv preprint arXiv:0705.1612},
year={2007},
archivePrefix={arXiv},
eprint={0705.1612},
primaryClass={cs.IT math.IT}
} | paolini2007a |
arxiv-332 | 0705.1617 | Non-Computability of Consciousness | <|reference_start|>Non-Computability of Consciousness: With the great success in simulating many intelligent behaviors using computing devices, there has been an ongoing debate whether all conscious activities are computational processes. In this paper, the answer to this question is shown to be no. A certain phenomenon of consciousness is demonstrated to be fully represented as a computational process using a quantum computer. Based on the computability criterion discussed with Turing machines, the model constructed is shown to necessarily involve a non-computable element. The concept that this is solely a quantum effect and does not work for a classical case is also discussed.<|reference_end|> | arxiv | @article{song2007non-computability,
title={Non-Computability of Consciousness},
author={Daegene Song},
journal={NeuroQuantology 5, 382 (2007).},
year={2007},
archivePrefix={arXiv},
eprint={0705.1617},
primaryClass={quant-ph astro-ph cs.AI}
} | song2007non-computability |
arxiv-333 | 0705.1672 | Principal Component Analysis and Automatic Relevance Determination in Damage Identification | <|reference_start|>Principal Component Analysis and Automatic Relevance Determination in Damage Identification: This paper compares two neural network input selection schemes, the Principal Component Analysis (PCA) and the Automatic Relevance Determination (ARD) based on MacKay's evidence framework. The PCA takes all the input data and projects it onto a lower dimension space, thereby reducing the dimension of the input space. This input reduction method often results in parameters that have significant influence on the dynamics of the data being diluted by those that do not influence the dynamics of the data. The ARD selects the most relevant input parameters and discards those that do not contribute significantly to the dynamics of the data being modelled. The ARD sometimes results in important input parameters being discarded, thereby compromising the dynamics of the data. The PCA and ARD methods are implemented together with a Multi-Layer-Perceptron (MLP) network for fault identification in structures and the performance of the two methods is assessed. It is observed that ARD and PCA give similar accuracy levels when used as input-selection schemes. Therefore, the choice of input-selection scheme is dependent on the nature of the data being processed.<|reference_end|> | arxiv | @article{mdlazi2007principal,
title={Principal Component Analysis and Automatic Relevance Determination in
Damage Identification},
author={L. Mdlazi, T. Marwala, C.J. Stander, C. Scheffer and P.S. Heyns},
journal={arXiv preprint arXiv:0705.1672},
year={2007},
archivePrefix={arXiv},
eprint={0705.1672},
primaryClass={cs.CE}
} | mdlazi2007principal |
arxiv-334 | 0705.1673 | Using artificial intelligence for data reduction in mechanical engineering | <|reference_start|>Using artificial intelligence for data reduction in mechanical engineering: In this paper artificial neural networks and support vector machines are used to reduce the amount of vibration data that is required to estimate the Time Domain Average of a gear vibration signal. Two models for estimating the time domain average of a gear vibration signal are proposed. The models are tested on data from an accelerated gear life test rig. Experimental results indicate that the required data for calculating the Time Domain Average of a gear vibration signal can be reduced by up to 75% when the proposed models are implemented.<|reference_end|> | arxiv | @article{mdlazi2007using,
title={Using artificial intelligence for data reduction in mechanical
engineering},
author={L. Mdlazi, C.J. Stander, P.S. Heyns and T. Marwala},
journal={arXiv preprint arXiv:0705.1673},
year={2007},
archivePrefix={arXiv},
eprint={0705.1673},
primaryClass={cs.CE cs.AI cs.NE}
} | mdlazi2007using |
arxiv-335 | 0705.1674 | Evolutionary Optimisation Methods for Template Based Image Registration | <|reference_start|>Evolutionary Optimisation Methods for Template Based Image Registration: This paper investigates the use of evolutionary optimisation techniques to register a template with a scene image. An error function is created to measure the correspondence of the template to the image. The problem presented here is to optimise the horizontal, vertical and scaling parameters that register the template with the scene. The Genetic Algorithm, Simulated Annealing and Particle Swarm Optimisations are compared to a Nelder-Mead Simplex optimisation with starting points chosen in a pre-processing stage. The paper investigates the precision and accuracy of each method and shows that all four methods perform favourably for image registration. SA is the most precise, GA is the most accurate. PSO is a good mix of both and the Simplex method returns local minima the most. A pre-processing stage should be investigated for the evolutionary methods in order to improve performance. Discrete versions of the optimisation methods should be investigated to further improve computational performance.<|reference_end|> | arxiv | @article{machowski2007evolutionary,
title={Evolutionary Optimisation Methods for Template Based Image Registration},
author={Lukasz A Machowski, Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.1674},
year={2007},
archivePrefix={arXiv},
eprint={0705.1674},
primaryClass={cs.CE cs.CV}
} | machowski2007evolutionary |
arxiv-336 | 0705.1680 | Option Pricing Using Bayesian Neural Networks | <|reference_start|>Option Pricing Using Bayesian Neural Networks: Options have provided a field of much study because of the complexity involved in pricing them. The Black-Scholes equations were developed to price options but they are only valid for European styled options. There is added complexity when trying to price American styled options and this is why the use of neural networks has been proposed. Neural Networks are able to predict outcomes based on past data. The inputs to the networks here are stock volatility, strike price and time to maturity with the output of the network being the call option price. There are two techniques for Bayesian neural networks used. One is Automatic Relevance Determination (for Gaussian Approximation) and one is a Hybrid Monte Carlo method, both used with Multi-Layer Perceptrons.<|reference_end|> | arxiv | @article{pires2007option,
title={Option Pricing Using Bayesian Neural Networks},
author={Michael Maio Pires, Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.1680},
year={2007},
archivePrefix={arXiv},
eprint={0705.1680},
primaryClass={cs.CE cs.NE}
} | pires2007option |
arxiv-337 | 0705.1682 | Capacity of Underspread Noncoherent WSSUS Fading Channels under Peak Signal Constraints | <|reference_start|>Capacity of Underspread Noncoherent WSSUS Fading Channels under Peak Signal Constraints: We characterize the capacity of the general class of noncoherent underspread wide-sense stationary uncorrelated scattering (WSSUS) time-frequency-selective Rayleigh fading channels, under peak constraints in time and frequency and in time only. Capacity upper and lower bounds are found which are explicit in the channel's scattering function and allow to identify the capacity-maximizing bandwidth for a given scattering function and a given peak-to-average power ratio.<|reference_end|> | arxiv | @article{durisi2007capacity,
title={Capacity of Underspread Noncoherent WSSUS Fading Channels under Peak
Signal Constraints},
author={Giuseppe Durisi, Helmut B\"olcskei, Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0705.1682},
year={2007},
doi={10.1109/ISIT.2007.4557219},
archivePrefix={arXiv},
eprint={0705.1682},
primaryClass={cs.IT math.IT}
} | durisi2007capacity |
arxiv-338 | 0705.1750 | A Tighter Analysis of Setcover Greedy Algorithm for Test Set | <|reference_start|>A Tighter Analysis of Setcover Greedy Algorithm for Test Set: The setcover greedy algorithm is a natural approximation algorithm for the test set problem. This paper gives a precise and tighter analysis of the performance guarantee of this algorithm. The author improves the performance guarantee $2\ln n$, which derives from the set cover problem, to $1.1354\ln n$ by applying the potential function technique. In addition, the author gives a nontrivial lower bound $1.0004609\ln n$ on the performance guarantee of this algorithm. This lower bound, together with the matching bound of the information content heuristic, confirms that the information content heuristic is slightly better than the setcover greedy algorithm in the worst case.<|reference_end|> | arxiv | @article{cui2007a,
title={A Tighter Analysis of Setcover Greedy Algorithm for Test Set},
author={Peng Cui},
journal={arXiv preprint arXiv:0705.1750},
year={2007},
archivePrefix={arXiv},
eprint={0705.1750},
primaryClass={cs.DS}
} | cui2007a |
arxiv-339 | 0705.1757 | Scalability and Optimisation of a Committee of Agents Using Genetic Algorithm | <|reference_start|>Scalability and Optimisation of a Committee of Agents Using Genetic Algorithm: A population of committees of agents that learn by using neural networks is implemented to simulate the stock market. Each committee of agents, which is regarded as a player in a game, is optimised by continually adapting the architecture of the agents using genetic algorithms. The committees of agents buy and sell stocks by following this procedure: (1) obtain the current price of stocks; (2) predict the future price of stocks; (3) and for a given price trade until all the players are mutually satisfied. The trading of stocks is conducted by following these rules: (1) if a player expects an increase in price then it tries to buy the stock; (2) else if it expects a drop in the price, it sells the stock; (3)and the order in which a player participates in the game is random. The proposed procedure is implemented to simulate trading of three stocks, namely, the Dow Jones, the Nasdaq and the S&P 500. A linear relationship between the number of players and agents versus the computational time to run the complete simulation is observed. It is also found that no player has a monopolistic advantage.<|reference_end|> | arxiv | @article{marwala2007scalability,
title={Scalability and Optimisation of a Committee of Agents Using Genetic
Algorithm},
author={T. Marwala, P. De Wilde, L. Correia, P. Mariano, R. Ribeiro, V.
Abramov, N. Szirbik, J. Goossenaerts},
journal={arXiv preprint arXiv:0705.1757},
year={2007},
archivePrefix={arXiv},
eprint={0705.1757},
primaryClass={cs.MA}
} | marwala2007scalability |
arxiv-340 | 0705.1759 | Finite Element Model Updating Using Response Surface Method | <|reference_start|>Finite Element Model Updating Using Response Surface Method: This paper proposes the response surface method for finite element model updating. The response surface method is implemented by approximating the finite element model surface response equation by a multi-layer perceptron. The updated parameters of the finite element model were calculated using genetic algorithm by optimizing the surface response equation. The proposed method was compared to the existing methods that use simulated annealing or genetic algorithm together with a full finite element model for finite element model updating. The proposed method was tested on an unsymmetrical H-shaped structure. It was observed that the proposed method gave the updated natural frequencies and mode shapes that were of the same order of accuracy as those given by simulated annealing and genetic algorithm. Furthermore, it was observed that the response surface method achieved these results at a computational speed that was more than 2.5 times as fast as the genetic algorithm and a full finite element model and 24 times faster than the simulated annealing.<|reference_end|> | arxiv | @article{marwala2007finite,
title={Finite Element Model Updating Using Response Surface Method},
author={Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.1759},
year={2007},
archivePrefix={arXiv},
eprint={0705.1759},
primaryClass={cs.CE}
} | marwala2007finite |
arxiv-341 | 0705.1760 | Dynamic Model Updating Using Particle Swarm Optimization Method | <|reference_start|>Dynamic Model Updating Using Particle Swarm Optimization Method: This paper proposes the use of particle swarm optimization method (PSO) for finite element (FE) model updating. The PSO method is compared to the existing methods that use simulated annealing (SA) or genetic algorithms (GA) for FE model for model updating. The proposed method is tested on an unsymmetrical H-shaped structure. It is observed that the proposed method gives updated natural frequencies the most accurate and followed by those given by an updated model that was obtained using the GA and a full FE model. It is also observed that the proposed method gives updated mode shapes that are best correlated to the measured ones, followed by those given by an updated model that was obtained using the SA and a full FE model. Furthermore, it is observed that the PSO achieves this accuracy at a computational speed that is faster than that by the GA and a full FE model which is faster than the SA and a full FE model.<|reference_end|> | arxiv | @article{marwala2007dynamic,
title={Dynamic Model Updating Using Particle Swarm Optimization Method},
author={Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.1760},
year={2007},
archivePrefix={arXiv},
eprint={0705.1760},
primaryClass={cs.CE cs.NE}
} | marwala2007dynamic |
arxiv-342 | 0705.1761 | Modeling and Controlling Interstate Conflict | <|reference_start|>Modeling and Controlling Interstate Conflict: Bayesian neural networks were used to model the relationship between input parameters, Democracy, Allies, Contingency, Distance, Capability, Dependency and Major Power, and the output parameter which is either peace or conflict. The automatic relevance determination was used to rank the importance of input variables. Control theory approach was used to identify input variables that would give a peaceful outcome. It was found that using all four controllable variables Democracy, Allies, Capability and Dependency; or using only Dependency or only Capabilities avoids all the predicted conflicts.<|reference_end|> | arxiv | @article{marwala2007modeling,
title={Modeling and Controlling Interstate Conflict},
author={Tshilidzi Marwala and Monica Lagazio},
journal={arXiv preprint arXiv:0705.1761},
year={2007},
archivePrefix={arXiv},
eprint={0705.1761},
primaryClass={cs.CY}
} | marwala2007modeling |
arxiv-343 | 0705.1787 | Energy-Efficient Resource Allocation in Wireless Networks: An Overview of Game-Theoretic Approaches | <|reference_start|>Energy-Efficient Resource Allocation in Wireless Networks: An Overview of Game-Theoretic Approaches: An overview of game-theoretic approaches to energy-efficient resource allocation in wireless networks is presented. Focusing on multiple-access networks, it is demonstrated that game theory can be used as an effective tool to study resource allocation in wireless networks with quality-of-service (QoS) constraints. A family of non-cooperative (distributed) games is presented in which each user seeks to choose a strategy that maximizes its own utility while satisfying its QoS requirements. The utility function considered here measures the number of reliable bits that are transmitted per joule of energy consumed and, hence, is particulary suitable for energy-constrained networks. The actions available to each user in trying to maximize its own utility are at least the choice of the transmit power and, depending on the situation, the user may also be able to choose its transmission rate, modulation, packet size, multiuser receiver, multi-antenna processing algorithm, or carrier allocation strategy. The best-response strategy and Nash equilibrium for each game is presented. Using this game-theoretic framework, the effects of power control, rate control, modulation, temporal and spatial signal processing, carrier allocation strategy and delay QoS constraints on energy efficiency and network capacity are quantified.<|reference_end|> | arxiv | @article{meshkati2007energy-efficient,
title={Energy-Efficient Resource Allocation in Wireless Networks: An Overview
of Game-Theoretic Approaches},
author={Farhad Meshkati, H. Vincent Poor and Stuart C. Schwartz},
journal={arXiv preprint arXiv:0705.1787},
year={2007},
doi={10.1109/MSP.2007.361602},
archivePrefix={arXiv},
eprint={0705.1787},
primaryClass={cs.IT cs.GT math.IT}
} | meshkati2007energy-efficient |
arxiv-344 | 0705.1788 | A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay QoS Constraints | <|reference_start|>A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay QoS Constraints: A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. This strategy is different from one that would maximize spectral efficiency. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are also studied and quantified. In addition, the effect of trellis-coded modulation on energy efficiency is discussed.<|reference_end|> | arxiv | @article{meshkati2007a,
title={A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA
Networks with Delay QoS Constraints},
author={Farhad Meshkati, Andrea J. Goldsmith, H. Vincent Poor and Stuart C.
Schwartz},
journal={arXiv preprint arXiv:0705.1788},
year={2007},
doi={10.1109/JSAC.2007.070802},
archivePrefix={arXiv},
eprint={0705.1788},
primaryClass={cs.IT cs.GT math.IT}
} | meshkati2007a |
arxiv-345 | 0705.1789 | Random Linear Network Coding: A free cipher? | <|reference_start|>Random Linear Network Coding: A free cipher?: We consider the level of information security provided by random linear network coding in network scenarios in which all nodes comply with the communication protocols yet are assumed to be potential eavesdroppers (i.e. "nice but curious"). For this setup, which differs from wiretapping scenarios considered previously, we develop a natural algebraic security criterion, and prove several of its key properties. A preliminary analysis of the impact of network topology on the overall network coding security, in particular for complete directed acyclic graphs, is also included.<|reference_end|> | arxiv | @article{lima2007random,
title={Random Linear Network Coding: A free cipher?},
author={Lu\'isa Lima and Muriel M\'edard and Jo\~ao Barros},
journal={arXiv preprint arXiv:0705.1789},
year={2007},
archivePrefix={arXiv},
eprint={0705.1789},
primaryClass={cs.IT cs.CR math.IT}
} | lima2007random |
arxiv-346 | 0705.1876 | Scheduling Dags under Uncertainty | <|reference_start|>Scheduling Dags under Uncertainty: This paper introduces a parallel scheduling problem where a directed acyclic graph modeling $t$ tasks and their dependencies needs to be executed on $n$ unreliable workers. Worker $i$ executes task $j$ correctly with probability $p_{i,j}$. The goal is to find a regimen $\Sigma$, that dictates how workers get assigned to tasks (possibly in parallel and redundantly) throughout execution, so as to minimize the expected completion time. This fundamental parallel scheduling problem arises in grid computing and project management fields, and has several applications. We show a polynomial time algorithm for the problem restricted to the case when dag width is at most a constant and the number of workers is also at most a constant. These two restrictions may appear to be too severe. However, they are fundamentally required. Specifically, we demonstrate that the problem is NP-hard with constant number of workers when dag width can grow, and is also NP-hard with constant dag width when the number of workers can grow. When both dag width and the number of workers are unconstrained, then the problem is inapproximable within factor less than 5/4, unless P=NP.<|reference_end|> | arxiv | @article{malewicz2007scheduling,
title={Scheduling Dags under Uncertainty},
author={Grzegorz Malewicz},
journal={arXiv preprint arXiv:0705.1876},
year={2007},
archivePrefix={arXiv},
eprint={0705.1876},
primaryClass={cs.DS cs.DM}
} | malewicz2007scheduling |
arxiv-347 | 0705.1886 | Ontology-Supported and Ontology-Driven Conceptual Navigation on the World Wide Web | <|reference_start|>Ontology-Supported and Ontology-Driven Conceptual Navigation on the World Wide Web: This paper presents the principles of ontology-supported and ontology-driven conceptual navigation. Conceptual navigation realizes the independence between resources and links to facilitate interoperability and reusability. An engine builds dynamic links, assembles resources under an argumentative scheme and allows optimization with a possible constraint, such as the user's available time. Among several strategies, two are discussed in detail with examples of applications. On the one hand, conceptual specifications for linking and assembling are embedded in the resource meta-description with the support of the ontology of the domain to facilitate meta-communication. Resources are like agents looking for conceptual acquaintances with intention. On the other hand, the domain ontology and an argumentative ontology drive the linking and assembling strategies.<|reference_end|> | arxiv | @article{crampes2007ontology-supported,
title={Ontology-Supported and Ontology-Driven Conceptual Navigation on the
World Wide Web},
author={Michel Crampes (LGI2P), Sylvie Ranwez (LGI2P)},
journal={Proceedings Hypertext 2000 (2000) 80},
year={2007},
archivePrefix={arXiv},
eprint={0705.1886},
primaryClass={cs.IR}
} | crampes2007ontology-supported |
arxiv-348 | 0705.1915 | A Technical Report On Grid Benchmarking using ATLAS VO | <|reference_start|>A Technical Report On Grid Benchmarking using ATLAS VO: Grids include heterogeneous resources, which are based on different hardware and software architectures or components. In correspondence with this diversity of the infrastructure, the execution time of any single job, as well as the total grid performance can both be affected substantially, which can be demonstrated by measurements. Running a simple benchmarking suite can show this heterogeneity and give us results about the differences over the grid sites.<|reference_end|> | arxiv | @article{kouvakis2007a,
title={A Technical Report On Grid Benchmarking using ATLAS V.O},
author={John Kouvakis, Fotis Georgatos},
journal={arXiv preprint arXiv:0705.1915},
year={2007},
archivePrefix={arXiv},
eprint={0705.1915},
primaryClass={cs.PF}
} | kouvakis2007a |
arxiv-349 | 0705.1919 | Optimal Watermark Embedding and Detection Strategies Under Limited Detection Resources | <|reference_start|>Optimal Watermark Embedding and Detection Strategies Under Limited Detection Resources: An information-theoretic approach is proposed to watermark embedding and detection under limited detector resources. First, we consider the attack-free scenario under which asymptotically optimal decision regions in the Neyman-Pearson sense are proposed, along with the optimal embedding rule. Later, we explore the case of zero-mean i.i.d. Gaussian covertext distribution with unknown variance under the attack-free scenario. For this case, we propose a lower bound on the exponential decay rate of the false-negative probability and prove that the optimal embedding and detecting strategy is superior to the customary linear, additive embedding strategy in the exponential sense. Finally, these results are extended to the case of memoryless attacks and general worst case attacks. Optimal decision regions and embedding rules are offered, and the worst attack channel is identified.<|reference_end|> | arxiv | @article{merhav2007optimal,
title={Optimal Watermark Embedding and Detection Strategies Under Limited
Detection Resources},
author={Neri Merhav and Erez Sabbag},
journal={arXiv preprint arXiv:0705.1919},
year={2007},
doi={10.1109/ISIT.2006.261759},
archivePrefix={arXiv},
eprint={0705.1919},
primaryClass={cs.IT cs.CR math.IT}
} | merhav2007optimal |
arxiv-350 | 0705.1922 | Crystallization in large wireless networks | <|reference_start|>Crystallization in large wireless networks: We analyze fading interference relay networks where M single-antenna source-destination terminal pairs communicate concurrently and in the same frequency band through a set of K single-antenna relays using half-duplex two-hop relaying. Assuming that the relays have channel state information (CSI), it is shown that in the large-M limit, provided K grows fast enough as a function of M, the network "decouples" in the sense that the individual source-destination terminal pair capacities are strictly positive. The corresponding required rate of growth of K as a function of M is found to be sufficient to also make the individual source-destination fading links converge to nonfading links. We say that the network "crystallizes" as it breaks up into a set of effectively isolated "wires in the air". A large-deviations analysis is performed to characterize the "crystallization" rate, i.e., the rate (as a function of M,K) at which the decoupled links converge to nonfading links. In the course of this analysis, we develop a new technique for characterizing the large-deviations behavior of certain sums of dependent random variables. For the case of no CSI at the relay level, assuming amplify-and-forward relaying, we compute the per source-destination terminal pair capacity for M,K converging to infinity, with K/M staying fixed, using tools from large random matrix theory.<|reference_end|> | arxiv | @article{morgenshtern2007crystallization,
title={Crystallization in large wireless networks},
author={Veniamin I. Morgenshtern and Helmut Boelcskei},
journal={IEEE Transactions on Information Theory, vol. 53, no. 10, pp.
3319-3349, Oct. 2007},
year={2007},
doi={10.1109/TIT.2007.904789},
archivePrefix={arXiv},
eprint={0705.1922},
primaryClass={cs.IT math.IT}
} | morgenshtern2007crystallization |
arxiv-351 | 0705.1925 | Double Sided Watermark Embedding and Detection with Perceptual Analysis | <|reference_start|>Double Sided Watermark Embedding and Detection with Perceptual Analysis: In our previous work, we introduced a double-sided technique that utilizes but does not reject the host interference. Owing to this property of utilizing rather than rejecting the host interference, it has a big advantage over host-interference-rejecting schemes in that perceptual analysis can be easily implemented for our scheme to achieve the locally bounded maximum embedding strength. Thus, in this work, we detail how to implement perceptual analysis in our double-sided schemes, since perceptual analysis is very important for improving the fidelity of watermarked content. Through extensive performance comparisons, we further validate the performance advantage of our double-sided schemes.<|reference_end|> | arxiv | @article{zhong2007double,
title={Double Sided Watermark Embedding and Detection with Perceptual Analysis},
author={Jidong Zhong and Shangteng Huang},
journal={arXiv preprint arXiv:0705.1925},
year={2007},
archivePrefix={arXiv},
eprint={0705.1925},
primaryClass={cs.MM cs.CR}
} | zhong2007double |
arxiv-352 | 0705.1939 | Towards Informative Statistical Flow Inversion | <|reference_start|>Towards Informative Statistical Flow Inversion: A problem which has recently attracted research attention is that of estimating the distribution of flow sizes in internet traffic. On high traffic links it is sometimes impossible to record every packet. Researchers have approached the problem of estimating flow lengths from sampled packet data in two separate ways. Firstly, different sampling methodologies can be tried to more accurately measure the desired system parameters. One such method is the sample-and-hold method where, if a packet is sampled, all subsequent packets in that flow are sampled. Secondly, statistical methods can be used to ``invert'' the sampled data and produce an estimate of flow lengths from a sample. In this paper we propose, implement and test two variants on the sample-and-hold method. In addition we show how the sample-and-hold method can be inverted to get an estimation of the genuine distribution of flow sizes. Experiments are carried out on real network traces to compare standard packet sampling with three variants of sample-and-hold. The methods are compared for their ability to reconstruct the genuine distribution of flow sizes in the traffic.<|reference_end|> | arxiv | @article{clegg2007towards,
title={Towards Informative Statistical Flow Inversion},
author={Richard G. Clegg, Hamed Haddadi, Raul Landa, Miguel Rio},
journal={arXiv preprint arXiv:0705.1939},
year={2007},
archivePrefix={arXiv},
eprint={0705.1939},
primaryClass={cs.NI cs.PF}
} | clegg2007towards |
arxiv-353 | 0705.1956 | A Branch and Cut Algorithm for the Halfspace Depth Problem | <|reference_start|>A Branch and Cut Algorithm for the Halfspace Depth Problem: The concept of data depth in non-parametric multivariate descriptive statistics is the generalization of the univariate rank method to multivariate data. Halfspace depth is a measure of data depth. Given a set S of points and a point p, the halfspace depth (or rank) k of p is defined as the minimum number of points of S contained in any closed halfspace with p on its boundary. Computing halfspace depth is NP-hard, and it is equivalent to the Maximum Feasible Subsystem problem. In this thesis a mixed integer program is formulated with the big-M method for the halfspace depth problem. We suggest a branch and cut algorithm. In this algorithm, Chinneck's heuristic algorithm is used to find an upper bound and a related technique based on sensitivity analysis is used for branching. Irreducible Infeasible Subsystem (IIS) hitting set cuts are applied. We also suggest a binary search algorithm which may be more stable numerically. The algorithms are implemented with the BCP framework from the COIN-OR project.<|reference_end|> | arxiv | @article{chen2007a,
title={A Branch and Cut Algorithm for the Halfspace Depth Problem},
author={Dan Chen},
journal={arXiv preprint arXiv:0705.1956},
year={2007},
archivePrefix={arXiv},
eprint={0705.1956},
primaryClass={cs.CG}
} | chen2007a |
arxiv-354 | 0705.1970 | A Closed-Form Method for LRU Replacement under Generalized Power-Law Demand | <|reference_start|>A Closed-Form Method for LRU Replacement under Generalized Power-Law Demand: We consider the well known \emph{Least Recently Used} (LRU) replacement algorithm and analyze it under the independent reference model and generalized power-law demand. For this extensive family of demand distributions we derive a closed-form expression for the per object steady-state hit ratio. To the best of our knowledge, this is the first analytic derivation of the per object hit ratio of LRU that can be obtained in constant time without requiring laborious numeric computations or simulation. Since most applications of replacement algorithms include (at least) some scenarios under i.i.d. requests, our method has substantial practical value, especially when having to analyze multiple caches, where existing numeric methods and simulation become too time consuming.<|reference_end|> | arxiv | @article{laoutaris2007a,
title={A Closed-Form Method for LRU Replacement under Generalized Power-Law
Demand},
author={Nikolaos Laoutaris},
journal={arXiv preprint arXiv:0705.1970},
year={2007},
archivePrefix={arXiv},
eprint={0705.1970},
primaryClass={cs.DS}
} | laoutaris2007a |
arxiv-355 | 0705.1986 | On the Hopcroft's minimization algorithm | <|reference_start|>On the Hopcroft's minimization algorithm: We show that the absolute worst-case time complexity for Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave the example of de Bruijn words as a language that requires O(n log n) steps, obtained by carefully choosing the splitting sets and processing these sets in a FIFO mode. We refine that result by showing that the Berstel/Carton example actually attains the absolute worst-case time complexity for unary languages. We also show that a LIFO implementation will not achieve the same worst-case time complexity for unary languages. Lastly, we show that the same result also holds for cover automata and for the modification of Hopcroft's algorithm used in the minimization of cover automata.<|reference_end|> | arxiv | @article{paun2007on,
title={On the Hopcroft's minimization algorithm},
author={Andrei Paun},
journal={arXiv preprint arXiv:0705.1986},
year={2007},
archivePrefix={arXiv},
eprint={0705.1986},
primaryClass={cs.DS}
} | paun2007on |
arxiv-356 | 0705.1999 | A first-order Temporal Logic for Actions | <|reference_start|>A first-order Temporal Logic for Actions: We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.<|reference_end|> | arxiv | @article{schwind2007a,
title={A first-order Temporal Logic for Actions},
author={Camilla Schwind (LIF)},
journal={arXiv preprint arXiv:0705.1999},
year={2007},
archivePrefix={arXiv},
eprint={0705.1999},
primaryClass={cs.AI cs.LO}
} | schwind2007a |
arxiv-357 | 0705.2009 | Bit-Interleaved Coded Multiple Beamforming with Imperfect CSIT | <|reference_start|>Bit-Interleaved Coded Multiple Beamforming with Imperfect CSIT: This paper addresses the performance of bit-interleaved coded multiple beamforming (BICMB) [1], [2] with imperfect knowledge of beamforming vectors. Most studies for limited-rate channel state information at the transmitter (CSIT) assume that the precoding matrix has an invariance property under an arbitrary unitary transform. In BICMB, this property does not hold. On the other hand, the optimum precoder and detector for BICMB are invariant under a diagonal unitary transform. In order to design a limited-rate CSIT system for BICMB, we propose a new distortion measure optimum under this invariance. Based on this new distortion measure, we introduce a new set of centroids and employ the generalized Lloyd algorithm for codebook design. We provide simulation results demonstrating the performance improvement achieved with the proposed distortion measure and the codebook design for various receivers with linear detectors. We show that although these receivers have the same performance for perfect CSIT, their performance varies under imperfect CSIT.<|reference_end|> | arxiv | @article{sengul2007bit-interleaved,
title={Bit-Interleaved Coded Multiple Beamforming with Imperfect CSIT},
author={Ersin Sengul, Hong Ju Park, Ender Ayanoglu},
journal={arXiv preprint arXiv:0705.2009},
year={2007},
archivePrefix={arXiv},
eprint={0705.2009},
primaryClass={cs.IT math.IT}
} | sengul2007bit-interleaved |
arxiv-358 | 0705.2011 | Multi-Dimensional Recurrent Neural Networks | <|reference_start|>Multi-Dimensional Recurrent Neural Networks: Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.<|reference_end|> | arxiv | @article{graves2007multi-dimensional,
title={Multi-Dimensional Recurrent Neural Networks},
author={Alex Graves, Santiago Fernandez, Juergen Schmidhuber},
journal={arXiv preprint arXiv:0705.2011},
year={2007},
number={04-07},
archivePrefix={arXiv},
eprint={0705.2011},
primaryClass={cs.AI cs.CV}
} | graves2007multi-dimensional |
arxiv-359 | 0705.2065 | Mean Field Models of Message Throughput in Dynamic Peer-to-Peer Systems | <|reference_start|>Mean Field Models of Message Throughput in Dynamic Peer-to-Peer Systems: The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.<|reference_end|> | arxiv | @article{harwood2007mean,
title={Mean Field Models of Message Throughput in Dynamic Peer-to-Peer Systems},
author={Aaron Harwood, Olga Ohrimenko},
journal={arXiv preprint arXiv:0705.2065},
year={2007},
archivePrefix={arXiv},
eprint={0705.2065},
primaryClass={cs.DC cs.PF}
} | harwood2007mean |
arxiv-360 | 0705.2084 | CDMA Technology for Intelligent Transportation Systems | <|reference_start|>CDMA Technology for Intelligent Transportation Systems: Scientists and technologists involved in the development of radar and remote sensing systems around the world are now trying to save manpower by developing a new application of their ideas: Intelligent Transport Systems (ITS). World statistics show that incorporating such a wireless radar system into cars would decrease road accidents by 8-10% yearly. The wireless technology has to be chosen properly so that it is capable of tackling the severe interference present on the open road. A combined digital technology such as spread spectrum together with diversity reception helps greatly in this regard. Accordingly, the choice is an FHSS-based space diversity system that utilizes a carrier frequency around the 5.8 GHz ISM band, with an available bandwidth of 80 MHz and no license requirement. For an efficient design, the radio channel on which the design is based is characterized. Of the two available modes, communication and radar, the radar mode provides a conditional measurement of the range of the nearest car after authentication of the received code, thus ensuring the reliability and accuracy of the measurement. To make the system operational in simultaneous mode, we have adopted a Software Defined Radio approach for best speed and flexibility.<|reference_end|> | arxiv | @article{bera2007cdma,
title={CDMA Technology for Intelligent Transportation Systems},
author={Rabindranath Bera, Jitendranath Bera, Sanjib Sil, Dipak Mondal, Sourav
Dhar and Debdatta Kandar},
journal={arXiv preprint arXiv:0705.2084},
year={2007},
archivePrefix={arXiv},
eprint={0705.2084},
primaryClass={cs.NI}
} | bera2007cdma |
arxiv-361 | 0705.2085 | RADAR Imaging in the Open field At 300 MHz-3000 MHz Radio Band | <|reference_start|>RADAR Imaging in the Open field At 300 MHz-3000 MHz Radio Band: With the technological growth of broadband wireless technologies like CDMA and UWB, substantial development efforts towards wireless communication systems and imaging radar systems are well justified. Efforts are also being directed towards a convergence technology: the convergence between communication and radar technology, which will result in ITS (Intelligent Transport Systems) and other applications. This encourages the present authors in this development. They are trying to converge communication technologies towards radar and to achieve interference-free and clutter-free quality remote images of targets using DS-UWB wireless technology.<|reference_end|> | arxiv | @article{bera2007radar,
title={RADAR Imaging in the Open field At 300 MHz-3000 MHz Radio Band},
author={Rabindranath Bera, Jitendranath Bera, Sanjib Sil, Sourav Dhar,
Debdatta Kandar, Dipak Mondal},
journal={arXiv preprint arXiv:0705.2085},
year={2007},
archivePrefix={arXiv},
eprint={0705.2085},
primaryClass={cs.NI}
} | bera2007radar |
arxiv-362 | 0705.2106 | Scientific citations in Wikipedia | <|reference_start|>Scientific citations in Wikipedia: The Internet-based encyclopaedia Wikipedia has grown to become one of the most visited web-sites on the Internet. However, critics have questioned the quality of entries, and an empirical study has shown Wikipedia to contain errors in a 2005 sample of science entries. Biased coverage and lack of sources are among the "Wikipedia risks". The present work describes a simple assessment of these aspects by examining the outbound links from Wikipedia articles to articles in scientific journals, with a comparison against journal statistics from Journal Citation Reports such as impact factors. The results show an increasing use of structured citation markup and good agreement with the citation pattern seen in the scientific literature, though with a slight tendency to cite articles in high-impact journals such as Nature and Science. These results increase confidence in Wikipedia as a good information organizer for science in general.<|reference_end|> | arxiv | @article{nielsen2007scientific,
title={Scientific citations in Wikipedia},
author={Finn Aarup Nielsen},
journal={First Monday, 12(8), 2007 August},
year={2007},
archivePrefix={arXiv},
eprint={0705.2106},
primaryClass={cs.DL cs.IR}
} | nielsen2007scientific |
arxiv-363 | 0705.2125 | Parallelized approximation algorithms for minimum routing cost spanning trees | <|reference_start|>Parallelized approximation algorithms for minimum routing cost spanning trees: We parallelize several previously proposed algorithms for the minimum routing cost spanning tree problem and some related problems.<|reference_end|> | arxiv | @article{chang2007parallelized,
title={Parallelized approximation algorithms for minimum routing cost spanning
trees},
author={Ching-Lueh Chang, Yuh-Dauh Lyuu},
journal={arXiv preprint arXiv:0705.2125},
year={2007},
archivePrefix={arXiv},
eprint={0705.2125},
primaryClass={cs.DS cs.CC}
} | chang2007parallelized |
arxiv-364 | 0705.2126 | Improvements to the Psi-SSA representation | <|reference_start|>Improvements to the Psi-SSA representation: Modern compiler implementations use the Static Single Assignment representation as a way to efficiently implement optimizing algorithms. However this representation is not well adapted to architectures with a predicated instruction set. The Psi-SSA representation extends the SSA representation such that standard SSA algorithms can be easily adapted to an architecture with a fully predicated instruction set. A new pseudo operation, the Psi operation, is introduced to merge several conditional definitions into a unique definition.<|reference_end|> | arxiv | @article{de ferriere2007improvements,
title={Improvements to the Psi-SSA representation},
author={Francois De Ferriere},
journal={Published in proceedings for the workshop "Software and Compilers
for Embedded Systems (SCOPES) 2007" (20/04/2007)},
year={2007},
archivePrefix={arXiv},
eprint={0705.2126},
primaryClass={cs.PL}
} | de ferriere2007improvements |
arxiv-365 | 0705.2137 | Best insertion algorithm for resource-constrained project scheduling problem | <|reference_start|>Best insertion algorithm for resource-constrained project scheduling problem: This paper considers heuristics for the well-known resource-constrained project scheduling problem (RCPSP). First, a feasible schedule is constructed using a randomized best insertion algorithm. The construction is followed by a local search in which a new solution is generated as follows: we randomly delete m activities from the list, which are then reinserted in the list in consecutive order. At the end of the run, the schedule with the minimum makespan is selected. Experimental work shows very good results on standard test instances found in PSPLIB.<|reference_end|> | arxiv | @article{pesek2007best,
title={Best insertion algorithm for resource-constrained project scheduling
problem},
author={Igor Pesek, Janez \v{Z}erovnik},
journal={arXiv preprint arXiv:0705.2137},
year={2007},
archivePrefix={arXiv},
eprint={0705.2137},
primaryClass={cs.DM}
} | pesek2007best |
arxiv-366 | 0705.2145 | Elementary transformation analysis for Array-OL | <|reference_start|>Elementary transformation analysis for Array-OL: Array-OL is a high-level specification language dedicated to the definition of intensive signal processing applications. Several tools exist for implementing an Array-OL specification as a data parallel program. While Array-OL can be used directly, it is often convenient to be able to deduce part of the specification from a sequential version of the application. This paper proposes such an analysis and examines its feasibility and its limits.<|reference_end|> | arxiv | @article{feautrier2007elementary,
title={Elementary transformation analysis for Array-OL},
author={Paul Feautrier (LIP, INRIA Rh^one-Alpes)},
journal={arXiv preprint arXiv:0705.2145},
year={2007},
archivePrefix={arXiv},
eprint={0705.2145},
primaryClass={cs.PL}
} | feautrier2007elementary |
arxiv-367 | 0705.2147 | On the freezing of variables in random constraint satisfaction problems | <|reference_start|>On the freezing of variables in random constraint satisfaction problems: The set of solutions of random constraint satisfaction problems (zero energy groundstates of mean-field diluted spin glasses) undergoes several structural phase transitions as the amount of constraints is increased. This set first breaks down into a large number of well separated clusters. At the freezing transition, which is in general distinct from the clustering one, some variables (spins) take the same value in all solutions of a given cluster. In this paper we study the critical behavior around the freezing transition, which appears in the unfrozen phase as the divergence of the sizes of the rearrangements induced in response to the modification of a variable. The formalism is developed on generic constraint satisfaction problems and applied in particular to the random satisfiability of boolean formulas and to the coloring of random graphs. The computation is first performed in random tree ensembles, for which we underline a connection with percolation models and with the reconstruction problem of information theory. The validity of these results for the original random ensembles is then discussed in the framework of the cavity method.<|reference_end|> | arxiv | @article{semerjian2007on,
title={On the freezing of variables in random constraint satisfaction problems},
author={Guilhem Semerjian},
journal={J. Stat. Phys. 130, 251 (2008)},
year={2007},
doi={10.1007/s10955-007-9417-7},
archivePrefix={arXiv},
eprint={0705.2147},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC math.PR}
} | semerjian2007on |
arxiv-368 | 0705.2170 | Sequential mechanism design | <|reference_start|>Sequential mechanism design: In the customary VCG (Vickrey-Clarke-Groves) mechanism truth-telling is a dominant strategy. In this paper we study the sequential VCG mechanism and show that other dominant strategies may then exist. We illustrate how this fact can be used to minimize taxes using examples concerned with Clarke tax and public projects.<|reference_end|> | arxiv | @article{apt2007sequential,
title={Sequential mechanism design},
author={Krzysztof R. Apt and Arantza Est\'evez-Fern\'andez},
journal={arXiv preprint arXiv:0705.2170},
year={2007},
archivePrefix={arXiv},
eprint={0705.2170},
primaryClass={cs.GT}
} | apt2007sequential |
arxiv-369 | 0705.2205 | From Nondeterministic B\"uchi and Streett Automata to Deterministic Parity Automata | <|reference_start|>From Nondeterministic B\"uchi and Streett Automata to Deterministic Parity Automata: In this paper we revisit Safra's determinization constructions for automata on infinite words. We show how to construct deterministic automata with fewer states and, most importantly, parity acceptance conditions. Determinization is used in numerous applications, such as reasoning about tree automata, satisfiability of CTL*, and realizability and synthesis of logical specifications. The upper bounds for all these applications are reduced by using the smaller deterministic automata produced by our construction. In addition, the parity acceptance conditions allows to use more efficient algorithms (when compared to handling Rabin or Streett acceptance conditions).<|reference_end|> | arxiv | @article{piterman2007from,
title={From Nondeterministic B\"uchi and Streett Automata to Deterministic
Parity Automata},
author={Nir Piterman},
journal={Logical Methods in Computer Science, Volume 3, Issue 3 (August 14,
2007) lmcs:1199},
year={2007},
doi={10.2168/LMCS-3(3:5)2007},
archivePrefix={arXiv},
eprint={0705.2205},
primaryClass={cs.LO cs.FL}
} | piterman2007from |
arxiv-370 | 0705.2229 | On tractability and congruence distributivity | <|reference_start|>On tractability and congruence distributivity: Constraint languages that arise from finite algebras have recently been the object of study, especially in connection with the Dichotomy Conjecture of Feder and Vardi. An important class of algebras are those that generate congruence distributive varieties and included among this class are lattices, and more generally, those algebras that have near-unanimity term operations. An algebra will generate a congruence distributive variety if and only if it has a sequence of ternary term operations, called Jonsson terms, that satisfy certain equations. We prove that constraint languages consisting of relations that are invariant under a short sequence of Jonsson terms are tractable by showing that such languages have bounded relational width.<|reference_end|> | arxiv | @article{kiss2007on,
title={On tractability and congruence distributivity},
author={Emil Kiss, Matthew Valeriote},
journal={Logical Methods in Computer Science, Volume 3, Issue 2 (June 8,
2007) lmcs:1005},
year={2007},
doi={10.2168/LMCS-3(2:6)2007},
archivePrefix={arXiv},
eprint={0705.2229},
primaryClass={cs.CC cs.LO}
} | kiss2007on |
arxiv-371 | 0705.2235 | Response Prediction of Structural System Subject to Earthquake Motions using Artificial Neural Network | <|reference_start|>Response Prediction of Structural System Subject to Earthquake Motions using Artificial Neural Network: This paper uses Artificial Neural Network (ANN) models to compute the response of a structural system subject to Indian earthquakes, using the Chamoli and Uttarkashi ground motion data. The system is first trained on data from a single real earthquake. The trained ANN architecture is then used to simulate earthquakes with various intensities, and the predicted responses given by the ANN model are found to be accurate for practical purposes. When the ANN is trained on part of the ground motion data, it can also identify the responses of the structural system well. In this way the safety of structural systems may be predicted for future earthquakes without waiting for the earthquakes to occur. The time period and the corresponding maximum response of the building for an earthquake have been evaluated, and an ANN is then trained to predict the maximum response of the building at different time periods. The trained time period versus maximum response ANN model is also tested on real earthquake data from another place, which was not used in the training, and is found to be in good agreement.<|reference_end|> | arxiv | @article{chakraverty2007response,
title={Response Prediction of Structural System Subject to Earthquake Motions
using Artificial Neural Network},
author={S. Chakraverty, T. Marwala, Pallavi Gupta and Thando Tettey},
journal={arXiv preprint arXiv:0705.2235},
year={2007},
archivePrefix={arXiv},
eprint={0705.2235},
primaryClass={cs.AI}
} | chakraverty2007response |
arxiv-372 | 0705.2236 | Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling | <|reference_start|>Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling: This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and Pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of Pseudomodal Energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than the previously used multilayer perceptron.<|reference_end|> | arxiv | @article{marwala2007fault,
title={Fault Classification using Pseudomodal Energies and Neuro-fuzzy
modelling},
author={Tshilidzi Marwala, Thando Tettey and Snehashish Chakraverty},
journal={arXiv preprint arXiv:0705.2236},
year={2007},
archivePrefix={arXiv},
eprint={0705.2236},
primaryClass={cs.AI}
} | marwala2007fault |
arxiv-373 | 0705.2270 | Multi-Access MIMO Systems with Finite Rate Channel State Feedback | <|reference_start|>Multi-Access MIMO Systems with Finite Rate Channel State Feedback: This paper characterizes the effect of finite rate channel state feedback on the sum rate of a multi-access multiple-input multiple-output (MIMO) system. We propose to control the users jointly, specifically, we first choose the users jointly and then select the corresponding beamforming vectors jointly. To quantify the sum rate, this paper introduces the composite Grassmann manifold and the composite Grassmann matrix. By characterizing the distortion rate function on the composite Grassmann manifold and calculating the logdet function of a random composite Grassmann matrix, a good sum rate approximation is derived. According to the distortion rate function on the composite Grassmann manifold, the loss due to finite beamforming decreases exponentially as the feedback bits on beamforming increases.<|reference_end|> | arxiv | @article{dai2007multi-access,
title={Multi-Access MIMO Systems with Finite Rate Channel State Feedback},
author={Wei Dai, Brian Rider and Youjian Liu},
journal={arXiv preprint arXiv:0705.2270},
year={2007},
archivePrefix={arXiv},
eprint={0705.2270},
primaryClass={cs.IT math.IT}
} | dai2007multi-access |
arxiv-374 | 0705.2272 | Quantization Bounds on Grassmann Manifolds of Arbitrary Dimensions and MIMO Communications with Feedback | <|reference_start|>Quantization Bounds on Grassmann Manifolds of Arbitrary Dimensions and MIMO Communications with Feedback: This paper considers the quantization problem on the Grassmann manifold with dimension n and p. The unique contribution is the derivation of a closed-form formula for the volume of a metric ball in the Grassmann manifold when the radius is sufficiently small. This volume formula holds for Grassmann manifolds with arbitrary dimension n and p, while previous results are only valid for either p=1 or a fixed p with asymptotically large n. Based on the volume formula, the Gilbert-Varshamov and Hamming bounds for sphere packings are obtained. Assuming a uniformly distributed source and a distortion metric based on the squared chordal distance, tight lower and upper bounds are established for the distortion rate tradeoff. Simulation results match the derived results. As an application of the derived quantization bounds, the information rate of a Multiple-Input Multiple-Output (MIMO) system with finite-rate channel-state feedback is accurately quantified for arbitrary finite number of antennas, while previous results are only valid for either Multiple-Input Single-Output (MISO) systems or those with asymptotically large number of transmit antennas but fixed number of receive antennas.<|reference_end|> | arxiv | @article{dai2007quantization,
title={Quantization Bounds on Grassmann Manifolds of Arbitrary Dimensions and
MIMO Communications with Feedback},
author={Wei Dai, Youjian Liu and Brian Rider},
journal={arXiv preprint arXiv:0705.2272},
year={2007},
archivePrefix={arXiv},
eprint={0705.2272},
primaryClass={cs.IT math.IT}
} | dai2007quantization |
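The distortion metric named in the abstract above — the squared chordal distance between two p-dimensional subspaces — can be computed from orthonormal bases of the subspaces. A minimal NumPy sketch (illustrative, not taken from the paper; the function name is ours):

```python
import numpy as np

def chordal_dist_sq(A, B):
    # Squared chordal distance between the column spans of A and B:
    # d_c^2 = p - ||Qa^T Qb||_F^2, where Qa, Qb are orthonormal bases.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    p = Qa.shape[1]
    return p - np.linalg.norm(Qa.T @ Qb, "fro") ** 2
```

The distance is 0 for identical subspaces and p for fully orthogonal ones, and it depends only on the spans, not on the particular basis matrices supplied.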
arxiv-375 | 0705.2273 | On the Information Rate of MIMO Systems with Finite Rate Channel State Feedback and Power On/Off Strategy | <|reference_start|>On the Information Rate of MIMO Systems with Finite Rate Channel State Feedback and Power On/Off Strategy: This paper quantifies the information rate of multiple-input multiple-output (MIMO) systems with finite rate channel state feedback and power on/off strategy. In power on/off strategy, a beamforming vector (beam) is either turned on (denoted by on-beam) with a constant power or turned off. We prove that the ratio of the optimal number of on-beams and the number of antennas converges to a constant for a given signal-to-noise ratio (SNR) when the number of transmit and receive antennas approaches infinity simultaneously and when beamforming is perfect. Based on this result, a near optimal strategy, i.e., power on/off strategy with a constant number of on-beams, is discussed. For such a strategy, we propose the power efficiency factor to quantify the effect of imperfect beamforming. A formula is proposed to compute the maximum power efficiency factor achievable given a feedback rate. The information rate of the overall MIMO system can be approximated by combining the asymptotic results and the formula for power efficiency factor. Simulations show that this approximation is accurate for all SNR regimes.<|reference_end|> | arxiv | @article{dai2007on,
title={On the Information Rate of MIMO Systems with Finite Rate Channel State
Feedback and Power On/Off Strategy},
author={Wei Dai, Youjian Liu, Brian Rider and Vincent K.N. Lau},
journal={arXiv preprint arXiv:0705.2273},
year={2007},
archivePrefix={arXiv},
eprint={0705.2273},
primaryClass={cs.IT math.IT}
} | dai2007on |
arxiv-376 | 0705.2274 | How Many Users should be Turned On in a Multi-Antenna Broadcast Channel? | <|reference_start|>How Many Users should be Turned On in a Multi-Antenna Broadcast Channel?: This paper considers broadcast channels with L antennas at the base station and m single-antenna users, where each user has perfect channel knowledge and the base station obtains channel information through a finite rate feedback. The key observation of this paper is that the optimal number of on-users (users turned on), say s, is a function of signal-to-noise ratio (SNR) and other system parameters. Towards this observation, we use asymptotic analysis to guide the design of feedback and transmission strategies. As L, m and the feedback rates approach infinity linearly, we derive the asymptotic optimal feedback strategy and a realistic criterion to decide which users should be turned on. Define the corresponding asymptotic throughput per antenna as the spatial efficiency. It is a function of the number of on-users s, and therefore, s should be appropriately chosen. Based on the above asymptotic results, we also develop a scheme for a system with finite many antennas and users. Compared with other works where s is presumed constant, our scheme achieves a significant gain by choosing the appropriate s. Furthermore, our analysis and scheme is valid for heterogeneous systems where different users may have different path loss coefficients and feedback rates.<|reference_end|> | arxiv | @article{dai2007how,
title={How Many Users should be Turned On in a Multi-Antenna Broadcast Channel?},
author={Wei Dai, Youjian (Eugene) Liu and Brian Rider},
journal={arXiv preprint arXiv:0705.2274},
year={2007},
archivePrefix={arXiv},
eprint={0705.2274},
primaryClass={cs.IT math.IT}
} | dai2007how |
arxiv-377 | 0705.2278 | Unequal dimensional small balls and quantization on Grassmann Manifolds | <|reference_start|>Unequal dimensional small balls and quantization on Grassmann Manifolds: The Grassmann manifold G_{n,p}(L) is the set of all p-dimensional planes (through the origin) in the n-dimensional Euclidean space L^{n}, where L is either R or C. This paper considers an unequal dimensional quantization in which a source in G_{n,p}(L) is quantized through a code in G_{n,q}(L), where p and q are not necessarily the same. It is different from most works in literature where p\equiv q. The analysis for unequal dimensional quantization is based on the volume of a metric ball in G_{n,p}(L) whose center is in G_{n,q}(L). Our chief result is a closed-form formula for the volume of a metric ball when the radius is sufficiently small. This volume formula holds for Grassmann manifolds with arbitrary n, p, q and L, while previous results pertained only to some special cases. Based on this volume formula, several bounds are derived for the rate distortion tradeoff assuming the quantization rate is sufficiently high. The lower and upper bounds on the distortion rate function are asymptotically identical, and so precisely quantify the asymptotic rate distortion tradeoff. We also show that random codes are asymptotically optimal in the sense that they achieve the minimum achievable distortion with probability one as n and the code rate approach infinity linearly. Finally, we discuss some applications of the derived results to communication theory. A geometric interpretation in the Grassmann manifold is developed for capacity calculation of additive white Gaussian noise channel. Further, the derived distortion rate function is beneficial to characterizing the effect of beamforming matrix selection in multi-antenna communications.<|reference_end|> | arxiv | @article{dai2007unequal,
title={Unequal dimensional small balls and quantization on Grassmann Manifolds},
author={Wei Dai, Brian Rider and Youjian Liu},
journal={arXiv preprint arXiv:0705.2278},
year={2007},
doi={10.1109/ISIT.2007.4557483},
archivePrefix={arXiv},
eprint={0705.2278},
primaryClass={cs.IT math.IT}
} | dai2007unequal |
arxiv-378 | 0705.2305 | Fuzzy and Multilayer Perceptron for Evaluation of HV Bushings | <|reference_start|>Fuzzy and Multilayer Perceptron for Evaluation of HV Bushings: The work proposes the application of fuzzy set theory (FST) to diagnose the condition of high voltage bushings. The diagnosis uses dissolved gas analysis (DGA) data from bushings based on IEC60599 and IEEE C57-104 criteria for oil impregnated paper (OIP) bushings. FST and neural networks are compared in terms of accuracy and computational efficiency. Both FST and NN simulations were able to diagnose the bushings condition with 10% error. By using fuzzy theory, the maintenance department can classify bushings and know the extent of degradation in the component.<|reference_end|> | arxiv | @article{dhlamini2007fuzzy,
title={Fuzzy and Multilayer Perceptron for Evaluation of HV Bushings},
author={Sizwe M. Dhlamini, Tshilidzi Marwala, and Thokozani Majozi},
journal={arXiv preprint arXiv:0705.2305},
year={2007},
archivePrefix={arXiv},
eprint={0705.2305},
primaryClass={cs.AI cs.NE}
} | dhlamini2007fuzzy |
arxiv-379 | 0705.2307 | A Study in a Hybrid Centralised-Swarm Agent Community | <|reference_start|>A Study in a Hybrid Centralised-Swarm Agent Community: This paper describes a systems architecture for a hybrid Centralised/Swarm based multi-agent system. The issue of local goal assignment for agents is investigated through the use of a global agent which teaches the agents responses to given situations. We implement a test problem in the form of a Pursuit game, where the Multi-Agent system is a set of captor agents. The agents learn solutions to certain board positions from the global agent if they are unable to find a solution. The captor agents learn through the use of multi-layer perceptron neural networks. The global agent is able to solve board positions through the use of a Genetic Algorithm. The cooperation between agents and the results of the simulation are discussed here.<|reference_end|> | arxiv | @article{vanaardt2007a,
title={A Study in a Hybrid Centralised-Swarm Agent Community},
author={Bradley van Aardt, Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.2307},
year={2007},
archivePrefix={arXiv},
eprint={0705.2307},
primaryClass={cs.NE cs.AI}
} | vanaardt2007a
arxiv-380 | 0705.2310 | On-Line Condition Monitoring using Computational Intelligence | <|reference_start|>On-Line Condition Monitoring using Computational Intelligence: This paper presents bushing condition monitoring frameworks that use multi-layer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM) classifiers. The first level of the framework determines whether the bushing is faulty, while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using dissolved gas analysis. MLP gives superior performance to SVM and RBF in terms of accuracy and training time. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data, is introduced. This approach is able to accommodate new classes that are introduced by incoming data and is implemented using an incremental learning algorithm that uses MLP. The testing results improved from 67.5% to 95.8% as new data were introduced, and from 60% to 95.3% as new conditions were introduced. On average the confidence value of the framework in its decision was 0.92.<|reference_end|> | arxiv | @article{vilakazi2007on-line,
title={On-Line Condition Monitoring using Computational Intelligence},
author={C.B. Vilakazi, T. Marwala, P. Mautla and E. Moloto},
journal={arXiv preprint arXiv:0705.2310},
year={2007},
archivePrefix={arXiv},
eprint={0705.2310},
primaryClass={cs.AI}
} | vilakazi2007on-line |
arxiv-381 | 0705.2313 | TrustMIX: Trustworthy MIX for Energy Saving in Sensor Networks | <|reference_start|>TrustMIX: Trustworthy MIX for Energy Saving in Sensor Networks: MIX has recently been proposed as a new sensor scheme with better energy management for data-gathering in Wireless Sensor Networks. However, it is not known how it performs when some of the sensors carry out sinkhole attacks. In this paper, we propose a variant of MIX with adjunct computational trust management to limit the impact of such sinkhole attacks. We evaluate how MIX resists sinkhole attacks with and without computational trust management. The main result of this paper is to find that MIX is very vulnerable to sinkhole attacks but that the adjunct trust management efficiently reduces the impact of such attacks while preserving the main feature of MIX: increased lifetime of the network.<|reference_end|> | arxiv | @article{powell2007trustmix:,
title={TrustMIX: Trustworthy MIX for Energy Saving in Sensor Networks},
author={Olivier Powell, Luminita Moraru, Jean-Marc Seigneur},
journal={arXiv preprint arXiv:0705.2313},
year={2007},
archivePrefix={arXiv},
eprint={0705.2313},
primaryClass={cs.DC cs.CR cs.NI}
} | powell2007trustmix: |
arxiv-382 | 0705.2318 | Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers | <|reference_start|>Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers: We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model shows qualitatively different behaviors from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning show qualitatively different behaviors from each other. In Hebbian learning, we can analytically obtain the solutions. In this case, the generalization error monotonically decreases. The steady value of the generalization error is independent of the learning rate. The larger the number of teachers is and the more variety the ensemble teachers have, the smaller the generalization error is. In perceptron learning, we have to numerically obtain the solutions. In this case, the dynamical behaviors of the generalization error are non-monotonic. The smaller the learning rate is, the larger the number of teachers is; and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error is.<|reference_end|> | arxiv | @article{utsumi2007statistical,
title={Statistical Mechanics of Nonlinear On-line Learning for Ensemble
Teachers},
author={Hideto Utsumi, Seiji Miyoshi, Masato Okada},
journal={arXiv preprint arXiv:0705.2318},
year={2007},
doi={10.1143/JPSJ.76.114001},
archivePrefix={arXiv},
eprint={0705.2318},
primaryClass={cs.LG cond-mat.dis-nn}
} | utsumi2007statistical |
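For the simplest special case of the setting above — a single true teacher and no ensemble — on-line Hebbian learning can be simulated directly; the generalization error of two sign-perceptrons is eps = arccos(R)/pi, where R is the normalized teacher-student overlap. A rough sketch under those assumptions (sizes, rates and the function name are illustrative, not the paper's setup):

```python
import numpy as np

def hebbian_teacher_student(N=500, steps=10000, eta=1.0, seed=1):
    # On-line Hebbian learning of a sign-perceptron teacher B by a student J;
    # returns the final generalization error eps = arccos(R) / pi.
    rng = np.random.default_rng(seed)
    B = rng.standard_normal(N)
    B /= np.linalg.norm(B)          # true teacher weight vector (unit norm)
    J = np.zeros(N)                 # student starts from zero
    for _ in range(steps):
        x = rng.standard_normal(N)  # one random input example per step
        J += (eta / np.sqrt(N)) * np.sign(B @ x) * x  # Hebbian update
    R = (J @ B) / np.linalg.norm(J)
    return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi
```

With steps/N = 20 the overlap R is close to 1 and the error is well below the 0.5 of a random student, consistent with the monotone decrease the abstract reports for Hebbian learning.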
arxiv-383 | 0705.2351 | The Use of ITIL for Process Optimisation in the IT Service Centre of Harz University, exemplified in the Release Management Process | <|reference_start|>The Use of ITIL for Process Optimisation in the IT Service Centre of Harz University, exemplified in the Release Management Process: This paper details the use of the IT Infrastructure Library Framework (ITIL) for optimising process workflows in the IT Service Centre of Harz University in Wernigerode, Germany, exemplified by the Release Management Process. It describes how, during the course of a special ITIL project, the As-Is status of the various original processes was documented as part of the process life cycle and then transformed into the To-Be status, according to the ITIL Best Practice Framework. It is also shown how the ITIL framework fits into the four-layered process model that could be derived from interviews with the university's IT support staff, and how the various modified processes interconnect with each other to form a value chain. The paper highlights the final results of the project and gives an outlook on the future use of ITIL as a business modelling tool in the IT Service Centre of Harz University. It is currently being considered whether the process model developed during the project could be used as a reference model for other university IT centres.<|reference_end|> | arxiv | @article{scheruhn2007the,
title={The Use of ITIL for Process Optimisation in the IT Service Centre of
Harz University, exemplified in the Release Management Process},
author={Hans-Juergen Scheruhn, Christian Reinboth, Thomas Habel},
journal={arXiv preprint arXiv:0705.2351},
year={2007},
archivePrefix={arXiv},
eprint={0705.2351},
primaryClass={cs.OH}
} | scheruhn2007the |
arxiv-384 | 0705.2435 | Reduced Complexity Sphere Decoding for Square QAM via a New Lattice Representation | <|reference_start|>Reduced Complexity Sphere Decoding for Square QAM via a New Lattice Representation: Sphere decoding (SD) is a low complexity maximum likelihood (ML) detection algorithm, which has been adapted for different linear channels in digital communications. The complexity of the SD has been shown to be exponential in some cases, and polynomial in others and under certain assumptions. The sphere radius and the number of nodes visited throughout the tree traversal search are the decisive factors for the complexity of the algorithm. The radius problem has been addressed and treated widely in the literature. In this paper, we propose a new structure for SD, which drastically reduces the overall complexity. The complexity is measured in terms of the floating point operations per second (FLOPS) and the number of nodes visited throughout the algorithm tree search. This reduction in the complexity is due to the ability of decoding the real and imaginary parts of each jointly detected symbol independently of each other, making use of the new lattice representation. We further show by simulations that the new approach achieves 80% reduction in the overall complexity compared to the conventional SD for a 2x2 system, and almost 50% reduction for the 4x4 and 6x6 cases, thus relaxing the requirements for hardware implementation.<|reference_end|> | arxiv | @article{azzam2007reduced,
title={Reduced Complexity Sphere Decoding for Square QAM via a New Lattice
Representation},
author={Luay Azzam and Ender Ayanoglu},
journal={arXiv preprint arXiv:0705.2435},
year={2007},
archivePrefix={arXiv},
eprint={0705.2435},
primaryClass={cs.IT math.IT}
} | azzam2007reduced |
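The conventional sphere decoder that the abstract above takes as its baseline can be sketched for a small real-valued model y = Hx + n with a finite symbol alphabet. This is the textbook depth-first SD with radius pruning, not the paper's new lattice representation; the function name is ours:

```python
import numpy as np

def sphere_decode(H, y, symbols):
    # Depth-first sphere decoder: finds argmin_x ||y - H x|| over x in
    # symbols^n, using QR so levels can be searched from the bottom row up.
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = [np.inf, None]          # [squared radius, best candidate so far]
    x = np.zeros(n)

    def search(level, dist2):
        if dist2 >= best[0]:
            return                 # prune: partial path already leaves the sphere
        if level < 0:
            best[0], best[1] = dist2, x.copy()
            return
        for s in symbols:
            x[level] = s
            # residual at this level uses only already-fixed entries x[level:]
            resid = z[level] - R[level, level:] @ x[level:]
            search(level - 1, dist2 + resid ** 2)

    search(n - 1, 0.0)
    return best[1]
```

Every leaf reached shrinks the sphere radius, so later branches are pruned earlier; the node count, which the paper uses as a complexity measure, is exactly the number of `search` calls.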
arxiv-385 | 0705.2485 | Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV Data Analysis | <|reference_start|>Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV Data Analysis: In this paper, we present a method to optimise rough set partition sizes, with which rule extraction is performed on HIV data. The genetic algorithm optimisation technique is used to determine the partition sizes of a rough set in order to maximise the rough set's prediction accuracy. The proposed method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. Six demographic variables were used in the analysis: race, age of mother, education, gravidity, parity, and age of father, with the outcome or decision being either HIV positive or negative. Rough set theory is chosen because the extracted rules are easy to interpret. The prediction accuracy of equal-width bin partitioning is 57.7%, while the accuracy achieved after optimising the partitions is 72.8%. Several other methods have been used to analyse the HIV data, and their results are stated and compared to those of rough set theory (RST).<|reference_end|> | arxiv | @article{crossingham2007using,
title={Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV
Data Analysis},
author={Bodie Crossingham and Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.2485},
year={2007},
archivePrefix={arXiv},
eprint={0705.2485},
primaryClass={cs.NE cs.AI q-bio.QM}
} | crossingham2007using |
arxiv-386 | 0705.2503 | Improved Approximability Result for Test Set with Small Redundancy | <|reference_start|>Improved Approximability Result for Test Set with Small Redundancy: Test set with redundancy is one of the focuses in recent bioinformatics research. Set cover greedy algorithm (SGA for short) is a commonly used algorithm for test set with redundancy. This paper proves that the approximation ratio of SGA can be $(2-\frac{1}{2r})\ln n+{3/2}\ln r+O(\ln\ln n)$ by using the potential function technique. This result is better than the approximation ratio $2\ln n$ which directly derives from set multicover, when $r=o(\frac{\ln n}{\ln\ln n})$, and is an extension of the approximability results for plain test set.<|reference_end|> | arxiv | @article{cui2007improved,
title={Improved Approximability Result for Test Set with Small Redundancy},
author={Peng Cui},
journal={arXiv preprint arXiv:0705.2503},
year={2007},
archivePrefix={arXiv},
eprint={0705.2503},
primaryClass={cs.DS cs.CC}
} | cui2007improved |
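The set cover greedy algorithm (SGA) analyzed above, in its plain form, repeatedly picks the subset covering the most still-uncovered elements. This sketch shows only the base greedy rule; the paper's redundancy parameter r and the potential-function analysis are not reproduced here:

```python
def greedy_set_cover(universe, subsets):
    # SGA: greedily pick the subset covering the most uncovered elements
    # until the universe is covered.
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered by the given subsets")
        cover.append(best)
        uncovered -= best
    return cover
```

On the toy instance in the test below the greedy rule first takes {1, 2, 3} (covering three elements) and then {4, 5}, yielding an optimal two-set cover.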
arxiv-387 | 0705.2516 | Condition Monitoring of HV Bushings in the Presence of Missing Data Using Evolutionary Computing | <|reference_start|>Condition Monitoring of HV Bushings in the Presence of Missing Data Using Evolutionary Computing: The work proposes the application of neural networks with particle swarm optimisation (PSO) and genetic algorithms (GA) to compensate for missing data in classifying high voltage bushings. The classification is done using DGA data from 60966 bushings based on IEEEc57.104, IEC599 and IEEE production rates methods for oil impregnated paper (OIP) bushings. PSO and GA were compared in terms of accuracy and computational efficiency. Both GA and PSO simulations were able to estimate missing data values to an average 95% accuracy when only one variable was missing. However, PSO rapidly deteriorated to 66% accuracy with two variables missing simultaneously, compared to 84% for GA. The data estimated using GA was found to classify the condition of the bushings better than that estimated using PSO.<|reference_end|> | arxiv | @article{dhlamini2007condition,
title={Condition Monitoring of HV Bushings in the Presence of Missing Data
Using Evolutionary Computing},
author={Sizwe M. Dhlamini, Fulufhelo V. Nelwamondo, Tshilidzi Marwala},
journal={arXiv preprint arXiv:0705.2516},
year={2007},
archivePrefix={arXiv},
eprint={0705.2516},
primaryClass={cs.NE cs.AI}
} | dhlamini2007condition
arxiv-388 | 0705.2535 | Informatics Carnot Machine | <|reference_start|>Informatics Carnot Machine: Based on Planck's blackbody equation it is argued that a single-mode light pulse, with a large number of photons, carries one entropy unit. Similarly, an empty radiation mode carries no entropy. In this case, the calculated entropy that a coded sequence of light pulses carries is simply the Gibbs mixing entropy, which is identical to the logical Shannon information. This approach is supported by a demonstration that information transmission and amplification, by a sequence of light pulses in an optical fiber, is a classic Carnot machine comprising two isothermals and two adiabats. Therefore it is concluded that entropy, under certain conditions, is information.<|reference_end|> | arxiv | @article{kafri2007informatics,
title={Informatics Carnot Machine},
author={Oded Kafri},
journal={arXiv preprint arXiv:0705.2535},
year={2007},
archivePrefix={arXiv},
eprint={0705.2535},
primaryClass={cs.IT math.IT}
} | kafri2007informatics |
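The identification above of the Gibbs mixing entropy of a coded pulse sequence with its Shannon information can be checked numerically: for a binary sequence of on/off pulses the per-symbol entropy is H = -sum p log2 p over the symbol frequencies. A small sketch (the function name is ours):

```python
import math
from collections import Counter

def shannon_entropy_bits(sequence):
    # H = -sum_i p_i log2 p_i over symbol frequencies: the Shannon
    # information per symbol of the coded pulse sequence.
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in Counter(sequence).values())
```

A balanced on/off sequence carries 1 bit (one entropy unit in log-2 units) per pulse slot, while a constant sequence carries none, matching the empty-mode case in the abstract.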
arxiv-389 | 0705.2604 | Computational Intelligence for Condition Monitoring | <|reference_start|>Computational Intelligence for Condition Monitoring: Condition monitoring techniques are described in this chapter. Two aspects of the condition monitoring process are considered: (1) feature extraction; and (2) condition classification. Feature extraction methods described and implemented are fractals, Kurtosis and Mel-frequency Cepstral Coefficients. Classification methods described and implemented are support vector machines (SVM), hidden Markov models (HMM), Gaussian mixture models (GMM) and extension neural networks (ENN). The effectiveness of these features was tested using SVM, HMM, GMM and ENN on condition monitoring of bearings, and they were found to give good results.<|reference_end|> | arxiv | @article{marwala2007computational,
title={Computational Intelligence for Condition Monitoring},
author={Tshilidzi Marwala and Christina Busisiwe Vilakazi},
journal={arXiv preprint arXiv:0705.2604},
year={2007},
archivePrefix={arXiv},
eprint={0705.2604},
primaryClass={cs.CE}
} | marwala2007computational |
arxiv-390 | 0705.2626 | Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc | <|reference_start|>Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc: We describe our software package Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) publicly released recently. BLOPEX is available as a stand-alone serial library, as an external package to PETSc (``Portable, Extensible Toolkit for Scientific Computation'', a general purpose suite of tools for the scalable solution of partial differential equations and related problems developed by Argonne National Laboratory), and is also built into {\it hypre} (``High Performance Preconditioners'', scalable linear solvers package developed by Lawrence Livermore National Laboratory). The present BLOPEX release includes only one solver--the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method for symmetric eigenvalue problems. {\it hypre} provides users with advanced high-quality parallel preconditioners for linear systems, in particular, with domain decomposition and multigrid preconditioners. With BLOPEX, the same preconditioners can now be efficiently used for symmetric eigenvalue problems. PETSc facilitates the integration of independently developed application modules with strict attention to component interoperability, and makes BLOPEX extremely easy to compile and use with preconditioners that are available via PETSc. We present the LOBPCG algorithm in BLOPEX for {\it hypre} and PETSc. We demonstrate numerically the scalability of BLOPEX by testing it on a number of distributed and shared memory parallel systems, including a Beowulf system, SUN Fire 880, an AMD dual-core Opteron workstation, and IBM BlueGene/L supercomputer, using PETSc domain decomposition and {\it hypre} multigrid preconditioning. We test BLOPEX on a model problem, the standard 7-point finite-difference approximation of the 3-D Laplacian, with the problem size in the range $10^5-10^8$.<|reference_end|> | arxiv | @article{knyazev2007block,
title={Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in
hypre and PETSc},
author={A. V. Knyazev, M. E. Argentati, I. Lashuk, and E. E. Ovtchinnikov},
journal={SIAM Journal on Scientific Computing (SISC). 25(5): 2224-2239,
2007},
year={2007},
doi={10.1137/060661624},
number={UCDHSC-CCM-251},
archivePrefix={arXiv},
eprint={0705.2626},
primaryClass={cs.MS cs.NA}
} | knyazev2007block |
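The same LOBPCG method is also available serially as `scipy.sparse.linalg.lobpcg`, so a scaled-down version of the model problem above — the standard 7-point finite-difference 3-D Laplacian, here on a tiny grid with no preconditioner — might look like the following. This is a SciPy sketch, not the parallel hypre/PETSc builds the abstract describes:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import lobpcg

def laplacian_3d(n):
    # Standard 7-point finite-difference Laplacian on an n x n x n grid (h = 1),
    # assembled as a Kronecker sum of 1-D second-difference matrices.
    L1 = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    I = identity(n, format="csr")
    return (kron(kron(L1, I), I) + kron(kron(I, L1), I)
            + kron(kron(I, I), L1)).tocsr()

def smallest_eigs(n=8, k=4, seed=0):
    # LOBPCG for the k smallest eigenvalues, starting from a random block.
    A = laplacian_3d(n)
    X = np.random.default_rng(seed).standard_normal((n**3, k))
    vals, _ = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)
    return np.sort(vals)
```

The 1-D Dirichlet eigenvalues are 4 sin^2(k pi / (2 (n + 1))), so the smallest 3-D eigenvalue is 12 sin^2(pi / (2 (n + 1))), which the solver should reproduce; passing a preconditioner via the `M` argument is how the multigrid acceleration in the abstract would plug in.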
arxiv-391 | 0705.2765 | On the monotonization of the training set | <|reference_start|>On the monotonization of the training set: We consider the problem of minimal correction of the training set to make it consistent with monotonic constraints. This problem arises during analysis of data sets via techniques that require monotone data. We show that this problem is NP-hard in general and is equivalent to finding a maximal independent set in special directed graphs. Practically important cases of the problem are considered in detail: the cases when the partial order given on the set of replies is a total order or has dimension 2. We show that the second case can be reduced to maximization of a convex quadratic function on a convex set. For this case we construct an approximate polynomial-time algorithm based on convex optimization.<|reference_end|> | arxiv | @article{takhanov2007on,
title={On the monotonization of the training set},
author={Rustem Takhanov},
journal={arXiv preprint arXiv:0705.2765},
year={2007},
archivePrefix={arXiv},
eprint={0705.2765},
primaryClass={cs.LG cs.AI}
} | takhanov2007on |
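The consistency notion above can be made concrete: a training set satisfies the monotonic constraints iff no example dominates another componentwise while carrying a strictly smaller label, and the violating pairs are the arcs of the conflict digraph to which the paper reduces the correction problem. A small check (function names are ours; this finds violations, it does not solve the NP-hard minimal correction):

```python
import itertools

def dominates(a, b):
    # Componentwise a >= b in the feature partial order.
    return all(ai >= bi for ai, bi in zip(a, b))

def monotonicity_violations(X, y):
    # Pairs (i, j) where example i dominates example j yet y[i] < y[j].
    # The training set is monotone-consistent exactly when this list is empty.
    return [(i, j) for i, j in itertools.permutations(range(len(X)), 2)
            if dominates(X[i], X[j]) and y[i] < y[j]]
```

Removing or relabelling examples so that this list becomes empty, with as few changes as possible, is the minimal-correction problem the abstract shows to be NP-hard in general.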
arxiv-392 | 0705.2786 | Virtualization: A double-edged sword | <|reference_start|>Virtualization: A double-edged sword: Virtualization recently became a hot topic once again, after being dormant for more than twenty years. In the meantime, it has been almost forgotten that virtual machines are not such perfectly isolating environments as the principles suggest. These lessons were already learnt when the first virtualized systems were exposed to real-life usage. Contemporary virtualization software enables instant creation and destruction of virtual machines on a host, live migration from one host to another, execution history manipulation, etc. These features are very useful in practice, but they also cause headaches among security specialists, especially in current hostile network environments. In the present contribution we discuss the principles, potential benefits and risks of virtualization in a deja vu perspective, related to previous experiences with virtualization in the mainframe era.<|reference_end|> | arxiv | @article{wlodarz2007virtualization:,
title={Virtualization: A double-edged sword},
author={Joachim J. Wlodarz},
journal={arXiv preprint arXiv:0705.2786},
year={2007},
archivePrefix={arXiv},
eprint={0705.2786},
primaryClass={cs.OS cs.CR}
} | wlodarz2007virtualization: |
arxiv-393 | 0705.2787 | Worst-Case Background Knowledge for Privacy-Preserving Data Publishing | <|reference_start|>Worst-Case Background Knowledge for Privacy-Preserving Data Publishing: Recent work has shown the necessity of considering an attacker's background knowledge when reasoning about privacy in data publishing. However, in practice, the data publisher does not know what background knowledge the attacker possesses. Thus, it is important to consider the worst-case. In this paper, we initiate a formal study of worst-case background knowledge. We propose a language that can express any background knowledge about the data. We provide a polynomial time algorithm to measure the amount of disclosure of sensitive information in the worst case, given that the attacker has at most a specified number of pieces of information in this language. We also provide a method to efficiently sanitize the data so that the amount of disclosure in the worst case is less than a specified threshold.<|reference_end|> | arxiv | @article{martin2007worst-case,
title={Worst-Case Background Knowledge for Privacy-Preserving Data Publishing},
author={David J. Martin, Daniel Kifer, Ashwin Machanavajjhala, Johannes
Gehrke, Joseph Y. Halpern},
journal={arXiv preprint arXiv:0705.2787},
year={2007},
archivePrefix={arXiv},
eprint={0705.2787},
primaryClass={cs.DB}
} | martin2007worst-case |
arxiv-394 | 0705.2807 | The poset metrics that allow binary codes of codimension m to be m-, (m-1)-, or (m-2)-perfect | <|reference_start|>The poset metrics that allow binary codes of codimension m to be m-, (m-1)-, or (m-2)-perfect: A binary poset code of codimension M (of cardinality 2^{N-M}, where N is the code length) can correct at most M errors. All possible poset metrics that allow codes of codimension M to be M-, (M-1)- or (M-2)-perfect are described. Some general conditions on a poset which guarantee the nonexistence of perfect poset codes are derived; as examples, we prove the nonexistence of R-perfect poset codes for some R in the case of the crown poset and in the case of the union of disjoint chains. Index terms: perfect codes, poset codes<|reference_end|> | arxiv | @article{kim2007the,
title={The poset metrics that allow binary codes of codimension m to be m-,
(m-1)-, or (m-2)-perfect},
author={Hyun Kwang Kim (Pohang University of Science and Technology, South
Korea), Denis Krotov (Sobolev Institute of Mathematics, Novosibirsk, Russia)},
journal={IEEE Trans. Inf. Theory 54(11) 2008, 5241-5246},
year={2007},
doi={10.1109/TIT.2008.929972},
archivePrefix={arXiv},
eprint={0705.2807},
primaryClass={math.CO cs.DM}
} | kim2007the |
arxiv-395 | 0705.2819 | An Autonomous Distributed Admission Control Scheme for IEEE 802.11 DCF | <|reference_start|>An Autonomous Distributed Admission Control Scheme for IEEE 802.11 DCF: Admission control as a mechanism for providing QoS requires an accurate description of the requested flow as well as already admitted flows. Since 802.11 WLAN capacity is shared between flows belonging to all stations, admission control requires knowledge of all flows in the WLAN. Further, estimation of the load-dependent WLAN capacity through an analytical model requires inputs about channel data rate, payload size and the number of stations. These factors combined point to a centralized admission control, whereas for 802.11 DCF it is ideally performed in a distributed manner. The use of measurements from the channel avoids explicit inputs about the state of the channel described above. BUFFET, a model-based, measurement-assisted distributed admission control scheme for DCF proposed in this paper, relies on measurements to derive model inputs and predict WLAN saturation, thereby maintaining average delay within acceptable limits. Being measurement based, it adapts to a combination of data rates and payload sizes, making it completely autonomous and distributed. Performance analysis using OPNET simulations suggests that BUFFET is able to ensure an average delay under 7 ms at a near-optimal throughput.<|reference_end|> | arxiv | @article{patil2007an,
title={An Autonomous Distributed Admission Control Scheme for IEEE 802.11 DCF},
author={Preetam Patil, Varsha Apte (Department of CSE, IIT-Bombay, India)},
journal={arXiv preprint arXiv:0705.2819},
year={2007},
archivePrefix={arXiv},
eprint={0705.2819},
primaryClass={cs.NI cs.PF}
} | patil2007an |
arxiv-396 | 0705.2835 | Voronoi Diagram of Polygonal Chains under the Discrete Fr\'echet Distance | <|reference_start|>Voronoi Diagram of Polygonal Chains under the Discrete Fr\'echet Distance: Polygonal chains are fundamental objects in many applications like pattern recognition and protein structure alignment. A well-known measure to characterize the similarity of two polygonal chains is the famous Fr\'echet distance. In this paper, for the first time, we consider the Voronoi diagram of polygonal chains in $d$-dimension ($d=2,3$) under the discrete Fr\'echet distance. Given $n$ polygonal chains ${\cal C}$ in $d$-dimension ($d=2,3$), each with at most $k$ vertices, we prove fundamental properties of such a Voronoi diagram {\em VD}$_F({\cal C})$ by presenting the first known upper and lower bounds for {\em VD}$_F({\cal C})$.<|reference_end|> | arxiv | @article{bereg2007voronoi,
title={Voronoi Diagram of Polygonal Chains under the Discrete Fr\'echet
Distance},
author={Sergey Bereg, Marina Gavrilova and Binhai Zhu},
journal={arXiv preprint arXiv:0705.2835},
year={2007},
archivePrefix={arXiv},
eprint={0705.2835},
primaryClass={cs.CG cs.CC}
} | bereg2007voronoi |
arxiv-397 | 0705.2847 | Capacity of Sparse Multipath Channels in the Ultra-Wideband Regime | <|reference_start|>Capacity of Sparse Multipath Channels in the Ultra-Wideband Regime: This paper studies the ergodic capacity of time- and frequency-selective multipath fading channels in the ultrawideband (UWB) regime when training signals are used for channel estimation at the receiver. Motivated by recent measurement results on UWB channels, we propose a model for sparse multipath channels. A key implication of sparsity is that the independent degrees of freedom (DoF) in the channel scale sub-linearly with the signal space dimension (product of signaling duration and bandwidth). Sparsity is captured by the number of resolvable paths in delay and Doppler. Our analysis is based on a training and communication scheme that employs signaling over orthogonal short-time Fourier (STF) basis functions. STF signaling naturally relates sparsity in delay-Doppler to coherence in time-frequency. We study the impact of multipath sparsity on two fundamental metrics of spectral efficiency in the wideband/low-SNR limit introduced by Verdu: first- and second-order optimality conditions. Recent results by Zheng et al. have underscored the large gap in spectral efficiency between coherent and non-coherent extremes and the importance of channel learning in bridging the gap.
Building on these results, our analysis leads to the following implications of multipath sparsity: 1) The coherence requirements are shared in both time and frequency, thereby significantly relaxing the required scaling in coherence time with SNR; 2) Sparse multipath channels are asymptotically coherent -- for a given but large bandwidth, the channel can be learned perfectly and the coherence requirements for first- and second-order optimality met through sufficiently large signaling duration; and 3) The requirement of peaky signals in attaining capacity is eliminated or relaxed in sparse environments.<|reference_end|> | arxiv | @article{raghavan2007capacity,
title={Capacity of Sparse Multipath Channels in the Ultra-Wideband Regime},
author={Vasanthan Raghavan, Gautham Hariharan and Akbar Sayeed},
journal={arXiv preprint arXiv:0705.2847},
year={2007},
doi={10.1109/JSTSP.2007.906666},
archivePrefix={arXiv},
eprint={0705.2847},
primaryClass={cs.IT math.IT}
} | raghavan2007capacity |
arxiv-398 | 0705.2848 | Non-Coherent Capacity and Reliability of Sparse Multipath Channels in the Wideband Regime | <|reference_start|>Non-Coherent Capacity and Reliability of Sparse Multipath Channels in the Wideband Regime: In contrast to the prevalent assumption of rich multipath in information theoretic analysis of wireless channels, physical channels exhibit sparse multipath, especially at large bandwidths. We propose a model for sparse multipath fading channels and present results on the impact of sparsity on non-coherent capacity and reliability in the wideband regime. A key implication of sparsity is that the statistically independent degrees of freedom in the channel, that represent the delay-Doppler diversity afforded by multipath, scale at a sub-linear rate with the signal space dimension (time-bandwidth product). Our analysis is based on a training-based communication scheme that uses short-time Fourier (STF) signaling waveforms. Sparsity in delay-Doppler manifests itself as time-frequency coherence in the STF domain. From a capacity perspective, sparse channels are asymptotically coherent: the gap between coherent and non-coherent extremes vanishes in the limit of large signal space dimension without the need for peaky signaling. From a reliability viewpoint, there is a fundamental tradeoff between channel diversity and learnability that can be optimized to maximize the error exponent at any rate by appropriately choosing the signaling duration as a function of bandwidth.<|reference_end|> | arxiv | @article{hariharan2007non-coherent,
title={Non-Coherent Capacity and Reliability of Sparse Multipath Channels in
the Wideband Regime},
author={Gautham Hariharan and Akbar Sayeed},
journal={arXiv preprint arXiv:0705.2848},
year={2007},
archivePrefix={arXiv},
eprint={0705.2848},
primaryClass={cs.IT math.IT}
} | hariharan2007non-coherent |
arxiv-399 | 0705.2854 | Scanning and Sequential Decision Making for Multi-Dimensional Data - Part II: the Noisy Case | <|reference_start|>Scanning and Sequential Decision Making for Multi-Dimensional Data - Part II: the Noisy Case: We consider the problem of sequential decision making on random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet judged with respect to the clean data. In particular, we first consider the problem of sequentially scanning and filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., noisy image or video sequence), thus it is natural to ask what is the best achievable performance and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance and quantify the excess loss occurring when non-optimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of sequential scanning and prediction of noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to sub-optimal scans are derived, and a universal prediction algorithm is suggested. This paper is the second part of a two-part paper. The first paper dealt with sequential decision making on noiseless data arrays, namely, when the decision maker is judged with respect to the same data array it observes.<|reference_end|> | arxiv | @article{cohen2007scanning,
title={Scanning and Sequential Decision Making for Multi-Dimensional Data -
Part II: the Noisy Case},
author={Asaf Cohen, Tsachy Weissman and Neri Merhav},
journal={arXiv preprint arXiv:0705.2854},
year={2007},
archivePrefix={arXiv},
eprint={0705.2854},
primaryClass={cs.IT cs.CV math.IT}
} | cohen2007scanning |
arxiv-400 | 0705.2862 | Cryptanalysis of group-based key agreement protocols using subgroup distance functions | <|reference_start|>Cryptanalysis of group-based key agreement protocols using subgroup distance functions: We introduce a new approach for cryptanalysis of key agreement protocols based on noncommutative groups. This approach uses functions that estimate the distance of a group element to a given subgroup. We test it against the Shpilrain-Ushakov protocol, which is based on Thompson's group F.<|reference_end|> | arxiv | @article{ruinskiy2007cryptanalysis,
title={Cryptanalysis of group-based key agreement protocols using subgroup
distance functions},
author={Dima Ruinskiy, Adi Shamir, and Boaz Tsaban},
journal={Proceedings of the 10th International Conference on Practice and
Theory in Public-Key Cryptography PKC07, Lecture Notes in Computer Science
4450 (2007), 61--75},
year={2007},
doi={10.1007/978-3-540-71677-8_5},
archivePrefix={arXiv},
eprint={0705.2862},
primaryClass={cs.CR}
} | ruinskiy2007cryptanalysis |