Dataset schema (for string columns, Min/Max are string lengths in characters; for float64 columns, Min/Max are observed values):

| Column | Type | Min | Max |
|---|---|---|---|
| Query Text | string | 10 | 59.9k |
| Ranking 1 | string | 10 | 4.53k |
| Ranking 2 | string | 10 | 50.9k |
| Ranking 3 | string | 10 | 6.78k |
| Ranking 4 | string | 10 | 59.9k |
| Ranking 5 | string | 10 | 6.78k |
| Ranking 6 | string | 10 | 59.9k |
| Ranking 7 | string | 10 | 59.9k |
| Ranking 8 | string | 10 | 6.78k |
| Ranking 9 | string | 10 | 59.9k |
| Ranking 10 | string | 10 | 50.9k |
| Ranking 11 | string | 13 | 6.78k |
| Ranking 12 | string | 14 | 50.9k |
| Ranking 13 | string | 24 | 2.74k |
| score_0 | float64 | 1 | 1.25 |
| score_1 | float64 | 0 | 0.25 |
| score_2 | float64 | 0 | 0.25 |
| score_3 | float64 | 0 | 0.24 |
| score_4 | float64 | 0 | 0.24 |
| score_5 | float64 | 0 | 0.24 |
| score_6 | float64 | 0 | 0.21 |
| score_7 | float64 | 0 | 0.07 |
| score_8 | float64 | 0 | 0.03 |
| score_9 | float64 | 0 | 0.01 |
| score_10 | float64 | 0 | 0 |
| score_11 | float64 | 0 | 0 |
| score_12 | float64 | 0 | 0 |
| score_13 | float64 | 0 | 0 |

Each example row below lists the Query Text, then the thirteen Ranking texts in order, followed by the fourteen score values on one line.
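As a minimal illustration of how rows of this shape might be handled in code, the sketch below regroups the flat columns into a (query, ranked candidates with scores) structure. The column names come from the schema above; the `parse_row` helper, the toy `toy_row`, and the assumption that `score_0` refers to the query while `score_i` grades `Ranking i` are all hypothetical, not part of the dataset documentation.

```python
# Hypothetical sketch (not part of the dataset documentation): regroup one
# flat row of this dataset into (query, [(ranking_text, score), ...]).
def parse_row(row: dict):
    query = row["Query Text"]
    # Assumption: score_0 refers to the query itself, while score_1..score_13
    # grade Ranking 1..Ranking 13 respectively.
    ranked = [(row[f"Ranking {i}"], float(row[f"score_{i}"])) for i in range(1, 14)]
    return query, ranked

# Toy row with placeholder texts, only to show the expected shape of a row.
toy_row = {"Query Text": "example query"}
toy_row.update({f"Ranking {i}": f"candidate {i}" for i in range(1, 14)})
toy_row.update({f"score_{i}": 0.0 for i in range(14)})
toy_row["score_0"] = 1.0

query, ranked = parse_row(toy_row)
print(query, len(ranked))  # -> example query 13
```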
Equational Languages
The Vienna Definition Language
General formulation of formal grammars By extracting the basic properties common to the formal grammars that have appeared in the existing literature, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Matrix Equations and Normal Forms for Context-Free Grammars The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is first pointed out. The closure operation on a matrix of strings is defined and this concept is used to formalize the solution to a set of linear equations. A procedure is then given for rewriting a context-free grammar in Greibach normal form, where the replacement string of each production begins with a terminal symbol. An additional procedure is given for rewriting the grammar so that each replacement string both begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular expressions over the total vocabulary of the grammar, as is required by Greibach's procedure.
Fuzzy Algorithms
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Dynamic system modeling using a recurrent interval-valued fuzzy neural network and its hardware implementation This paper first proposes a new recurrent interval-valued fuzzy neural network (RIFNN) for dynamic system modeling. A new hardware implementation technique for the RIFNN using a field-programmable gate array (FPGA) chip is then proposed. The antecedent and consequent parts in an RIFNN use interval-valued fuzzy sets in order to increase the network noise resistance ability. A new recurrent structure is proposed in RIFNN, with the recurrent loops enabling it to handle dynamic system processing problems. An RIFNN is constructed from structure and parameter learning. For hardware implementation of the RIFNN, the pipeline technique and a new circuit for type-reduction operation are proposed to improve the chip performance. Simulations and comparisons with various feedforward and recurrent fuzzy neural networks verify the performance of the RIFNN under noisy conditions.
Development of a type-2 fuzzy proportional controller Studies have shown that PID controllers can be realized by type-1 (conventional) fuzzy logic systems (FLSs). However, the input-output mappings of such fuzzy PID controllers are fixed. The control performance would, therefore, vary if the system parameters are uncertain. This paper aims at developing a type-2 FLS to control a process whose parameters are uncertain. A method for designing type-2 triangular membership functions with the desired generalized centroid is first proposed. By using this type-2 fuzzy set to partition the output domain, a type-2 fuzzy proportional controller is obtained. It is shown that the type-2 fuzzy logic system is equivalent to a proportional controller that may assume a range of gains. Simulation results are presented to demonstrate that the performance of the proposed controller can be maintained even when the system parameters deviate from their nominal values.
A hybrid multi-criteria decision-making model for firms competence evaluation In this paper, we present a hybrid multi-criteria decision-making (MCDM) model to evaluate the competence of firms. The competence-based theory reveals that firm competencies are recognized from the exclusive and unique capabilities that each firm enjoys in the marketplace and are tightly intertwined within different business functions throughout the company. Therefore, competence in the firm is a composite of various attributes. Among them, many intangible and tangible attributes are difficult to measure. In order to overcome this issue, we invite fuzzy set theory into the measurement of performance. In this paper we first calculate the weight of each criterion through the adaptive analytic hierarchy process (AHP) approach (A^3) method, and then we appraise the performance of firms via linguistic variables which are expressed as trapezoidal fuzzy numbers. In the next step we transform these fuzzy numbers into interval data by means of the α-cut. Then, considering different values for α, we rank the firms through the TOPSIS method with interval data. Since there are different ranks for different α values, we apply the linear assignment method to obtain the final rank for the alternatives.
Fuzzy decision making with immediate probabilities We developed a new decision-making model with probabilistic information and used the concept of the immediate probability to aggregate the information. This type of probability modifies the objective probability by introducing the attitudinal character of the decision maker. In doing so, we use the ordered weighting average (OWA) operator. When using this model, it is assumed that the information is given by exact numbers. However, this may not be the real situation found within the decision-making problem. Sometimes, the information is vague or imprecise and it is necessary to use another approach to assess the information, such as the use of fuzzy numbers. Then, the decision-making problem can be represented more completely because we now consider the best and worst possible scenarios, along with the possibility that some intermediate event (an internal value) will occur. We will use the fuzzy ordered weighted averaging (FOWA) operator to aggregate the information with the probabilities. As a result, we will get the Immediate Probability-FOWA (IP-FOWA) operator. We will study some of its main properties. We will apply the new approach in a decision-making problem about selection of strategies.
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
A fuzzy logic system for the detection and recognition of handwritten street numbers Fuzzy logic is applied to the problem of locating and reading street numbers in digital images of handwritten mail. A fuzzy rule-based system is defined that uses uncertain information provided by image processing and neural network-based character recognition modules to generate multiple hypotheses with associated confidence values for the location of the street number in an image of a handwritten address. The results of a blind test of the resultant system are presented to demonstrate the value of this new approach. The results are compared to those obtained using a neural network trained with backpropagation. The fuzzy logic system achieved higher performance rates
A possibilistic approach to the modeling and resolution of uncertain closed-loop logistics Closed-loop logistics planning is an important tactic for the achievement of sustainable development. However, the correlation among the demand, recovery, and landfilling makes the estimation of their rates uncertain and difficult. Although the fuzzy numbers can present such kinds of overlapping phenomena, the conventional method of defuzzification using level-cut methods could result in the loss of information. To retain complete information, the possibilistic approach is adopted to obtain the possibilistic mean and mean square imprecision index (MSII) of the shortage and surplus for uncertain factors. By applying the possibilistic approach, a multi-objective, closed-loop logistics model considering shortage and surplus is formulated. The two objectives are to reduce both the total cost and the root MSII. Then, a non-dominated solution can be obtained to support decisions with lower perturbation and cost. Also, the information on prediction interval can be obtained from the possibilistic mean and root MSII to support the decisions in the uncertain environment. This problem is non-deterministic polynomial-time hard, so a new algorithm based on the spanning tree-based genetic algorithm has been developed. Numerical experiments have shown that the proposed algorithm can yield comparatively efficient and accurate results.
Scores (score_0 – score_13): 1.200022, 0.200022, 0.200022, 0.200022, 0.066689, 0.006263, 0.000033, 0.000026, 0.000023, 0.000019, 0.000014, 0, 0, 0
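As a purely illustrative follow-up to the sketch above, and assuming that score_1–score_13 act as graded relevance labels for Ranking 1–Ranking 13 (an assumption; the semantics of the scores are not stated here), the snippet below computes a discounted cumulative gain for this first example row from the scores just listed.

```python
import math

# Scores copied from the first example row above; score_0 (1.200022) is
# excluded under the assumption that it describes the query itself.
relevances = [0.200022, 0.200022, 0.200022, 0.066689, 0.006263, 0.000033,
              0.000026, 0.000023, 0.000019, 0.000014, 0.0, 0.0, 0.0]

# Standard DCG with a log2 position discount; rank positions start at 1.
dcg = sum(rel / math.log2(pos + 1) for pos, rel in enumerate(relevances, start=1))
print(f"DCG over the 13 ranked candidates = {dcg:.4f}")
```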
A Stochastic Computational Approach for Accurate and Efficient Reliability Evaluation Reliability is fast becoming a major concern due to the nanometric scaling of CMOS technology. Accurate analytical approaches for the reliability evaluation of logic circuits, however, have a computational complexity that generally increases exponentially with circuit size. This makes intractable the reliability analysis of large circuits. This paper initially presents novel computational models based on stochastic computation; using these stochastic computational models (SCMs), a simulation-based analytical approach is then proposed for the reliability evaluation of logic circuits. In this approach, signal probabilities are encoded in the statistics of random binary bit streams and non-Bernoulli sequences of random permutations of binary bits are used for initial input and gate error probabilities. By leveraging the bit-wise dependencies of random binary streams, the proposed approach takes into account signal correlations and evaluates the joint reliability of multiple outputs. Therefore, it accurately determines the reliability of a circuit; its precision is only limited by the random fluctuations inherent in the stochastic sequences. Based on both simulation and analysis, the SCM approach takes advantages of ease in implementation and accuracy in evaluation. The use of non-Bernoulli sequences as initial inputs further increases the evaluation efficiency and accuracy compared to the conventional use of Bernoulli sequences, so the proposed stochastic approach is scalable for analyzing large circuits. It can further account for various fault models as well as calculating the soft error rate (SER). These results are supported by extensive simulations and detailed comparison with existing approaches.
Probabilistic error modeling for nano-domain logic circuits In nano-domain logic circuits, errors generated are transient in nature and will arise due to the uncertainty or the unreliability of the computing element itself. This type of errors--which we refer to as dynamic errors--are to be distinguished from traditional faults and radiation related errors. Due to these highly likely dynamic errors, it is more appropriate to model nano-domain computing as probabilistic rather than deterministic. We propose a probabilistic error model based on Bayesian networks to estimate this expected output error probability, given dynamic error probabilities in each device since this estimate is crucial for nano-domain circuit designers to be able to compare and rank designs based on the expected output error. We estimate the overall output error probability by comparing the outputs of a dynamic error-encoded model with an ideal logic model. We prove that this probabilistic framework is a compact and minimal representation of the overall effect of dynamic errors in a circuit. We use both exact and approximate Bayesian inference schemes for propagation of probabilities. The exact inference shows better time performance than the state-of-the art by exploiting conditional independencies exhibited in the underlying probabilistic framework. However, exact inference is worst case NP-hard and can handle only small circuits. Hence, we use two approximate inference schemes for medium size benchmarks. We demonstrate the efficiency and accuracy of these approximate inference schemes by comparing estimated results with logic simulation results. We have performed our experiments on LGSynth'93 and ISCAS'85 benchmark circuits. We explore our probabilistic model to calculate: 1) error sensitivity of individual gates in a circuit; 2) compute overall exact error probabilities for small circuits; 3) compute approximate error probabilities for medium sized benchmarks using two stochastic sampling schemes; 4) compare and vet design with respect to dynamic errors; 5) characterize the input space for desired output characteristics by utilizing the unique backtracking capability of Bayesian networks (inverse problem); and 6) to apply selective redundancy to highly sensitive nodes for error tolerant designs.
Stochastic computational models for accurate reliability evaluation of logic circuits As reliability becomes a major concern with the continuous scaling of CMOS technology, several computational methodologies have been developed for the reliability evaluation of logic circuits. Previous accurate analytical approaches, however, have a computational complexity that generally increases exponentially with the size of a circuit, making the evaluation of large circuits intractable. This paper presents novel computational models based on stochastic computation, in which probabilities are encoded in the statistics of random binary bit streams, for the reliability evaluation of logic circuits. A computational approach using the stochastic computational models (SCMs) accurately determines the reliability of a circuit with its precision only limited by the random fluctuations inherent in the representation of random binary bit streams. The SCM approach has a linear computational complexity and is therefore scalable for use for any large circuits. Our simulation results demonstrate the accuracy and scalability of the SCM approach, and suggest its possible applications in VLSI design.
Probabilistic transfer matrices in symbolic reliability analysis of logic circuits We propose the probabilistic transfer matrix (PTM) framework to capture nondeterministic behavior in logic circuits. PTMs provide a concise description of both normal and faulty behavior, and are well-suited to reliability and error susceptibility calculations. A few simple composition rules based on connectivity can be used to recursively build larger PTMs (representing entire logic circuits) from smaller gate PTMs. PTMs for gates in series are combined using matrix multiplication, and PTMs for gates in parallel are combined using the tensor product operation. PTMs can accurately calculate joint output probabilities in the presence of reconvergent fanout and inseparable joint input distributions. To improve computational efficiency, we encode PTMs as algebraic decision diagrams (ADDs). We also develop equivalent ADD algorithms for newly defined matrix operations such as eliminate_variables and eliminate_redundant_variables, which aid in the numerical computation of circuit PTMs. We use PTMs to evaluate circuit reliability and derive polynomial approximations for circuit error probabilities in terms of gate error probabilities. PTMs can also analyze the effects of logic and electrical masking on error mitigation. We show that ignoring logic masking can overestimate errors by an order of magnitude. We incorporate electrical masking by computing error attenuation probabilities, based on analytical models, into an extended PTM framework for reliability computation. We further define a susceptibility measure to identify gates whose errors are not well masked. We show that hardening a few gates can significantly improve circuit reliability.
A Probabilistic-Based Design Methodology for Nanoscale Computation As current silicon-based techniques fast approach their practical limits, the investigation of nanoscale electronics, devices and system architectures becomes a central research priority. It is expected that nanoarchitectures will confront devices and interconnections with high inherent defect rates, which motivates the search for new architectural paradigms. In this paper, we propose a probabilistic-based design methodology for designing nanoscale computer architectures based on Markov Random Fields (MRF). The MRF can express arbitrary logic circuits and logic operation is achieved by maximizing the probability of state configurations in the logic network. Maximizing state probability is equivalent to minimizing a form of energy that depends on neighboring nodes in the network. Once we develop a library of elementary logic components, we can link them together to build desired architectures based on the belief propagation algorithm. Belief propagation is a way of organizing the global computation of marginal belief in terms of smaller local computations. We will illustrate the proposed design methodology with some elementary logic examples.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90(th)-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is however a main difference from the traditional quality assessment approaches, as now, the focus relies on the user perceived quality, opposed to the network centered approach classically proposed. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist on the handling of such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores (score_0 – score_13): 1.1, 0.066667, 0.066667, 0.033333, 0.003448, 0, 0, 0, 0, 0, 0, 0, 0, 0
A survey on distributed compressed sensing: theory and applications The compressed sensing (CS) theory makes the sample rate relate to signal structure and content. CS simultaneously samples and compresses the signal at a sampling frequency far below the Nyquist rate. However, CS only considers the intra-signal correlations, without taking the correlations of the multi-signals into account. Distributed compressed sensing (DCS) is an extension of CS that takes advantage of both the inter- and intra-signal correlations, and is widely used as a powerful method for multi-signal sensing and compression in many fields. In this paper, the characteristics and related works of DCS are reviewed. The framework of DCS is introduced. As DCS's main portions, sparse representation, measurement matrix selection, and joint reconstruction are classified and summarized. The applications of DCS are also categorized and discussed. Finally, concluding remarks and further research directions are provided.
Block-Sparse Signals: Uncertainty Relations And Efficient Recovery We consider efficient methods for the recovery of block-sparse signals-i.e., sparse signals that have nonzero entries occurring in clusters-from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l(2)/l(1)-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Block-sparse signals: uncertainty relations and efficient recovery We consider efficient methods for the recovery of block-sparse signals--i.e., sparse signals that have nonzero entries occurring in clusters--from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l2/l1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Decoding by linear programming This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f∈Rn from corrupted measurements y=Af+e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (||x||ℓ1:=Σi|xi|) min(g∈Rn) ||y - Ag||ℓ1 provided that the support of the vector of errors is not too large, ||e||ℓ0:=|{i:ei ≠ 0}|≤ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
Compressed Remote Sensing of Sparse Objects The linear inverse source and scattering problems are studied from the perspective of compressed sensing. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability. Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the incoherence effect induced by random antenna locations. The predictions of theorems are confirmed by numerical simulations.
Statistical timing analysis for intra-die process variations with spatial correlations Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
Ranking type-2 fuzzy numbers Type-2 fuzzy sets are a generalization of the ordinary fuzzy sets in which each type-2 fuzzy set is characterized by a fuzzy membership function. In this paper, we consider the problem of ranking a set of type-2 fuzzy numbers. We adopt a statistical viewpoint and interpret each type-2 fuzzy number as an ensemble of ordinary fuzzy numbers. This enables us to define a type-2 fuzzy rank and a type-2 rank uncertainty for each intuitionistic fuzzy number. We show the reasonableness of the results obtained by examining several test cases
Stability and Instance Optimality for Gaussian Measurements in Compressed Sensing In compressed sensing, we seek to gain information about a vector x∈ℝN from d ≪ N nonadaptive linear measurements. Candes, Donoho, Tao et al. (see, e.g., Candes, Proc. Intl. Congress Math., Madrid, 2006; Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006; Donoho, IEEE Trans. Inf. Theory 52:1289–1306, 2006) proposed to seek a good approximation to x via ℓ 1 minimization. In this paper, we show that in the case of Gaussian measurements, ℓ 1 minimization recovers the signal well from inaccurate measurements, thus improving the result from Candes et al. (Commun. Pure Appl. Math. 59:1207–1223, 2006). We also show that this numerically friendly algorithm (see Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006) with overwhelming probability recovers the signal with accuracy, comparable to the accuracy of the best k-term approximation in the Euclidean norm when k∼d/ln N.
On Generalized Induced Linguistic Aggregation Operators In this paper, we define various generalized induced linguistic aggregation operators, including the generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are the special cases of the GILOWA operator, induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are the special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are the special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are the special cases of the GILOWG operator.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
Scores (score_0 – score_13): 1.2, 0.018182, 0.011111, 0.00082, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
An interval type-2 fuzzy LINMAP method with approximate ideal solutions for multiple criteria decision analysis. The purpose of this paper is to develop a linear programming technique for multidimensional analysis of preference (LINMAP) to address multiple criteria decision analysis problems within the interval type-2 fuzzy environment based on interval type-2 trapezoidal fuzzy numbers. Considering the issue of anchor dependency, we use multiple anchor points in the decision-making process and employ approximate positive-ideal and negative-ideal solutions as the points of reference. Selected useful properties of the approximate ideal solutions are also investigated. In contrast to the classical LINMAP methods, this paper directly generates approximate ideal solutions from the characteristics of all alternatives. Next, this work presents the concept of closeness-based indices using Minkowski distances with approximate ideal solutions to develop a new approach for determining measurements of consistency and inconsistency. Under incomplete preference information on paired comparisons of the alternatives, this paper provides a novel method that uses the concept of comprehensive closeness-based indices to measure the poorness of fit and the goodness of fit. By applying the consistency indices and inconsistency indices, this work formulates an optimization problem that can be solved for the optimal weights of the criteria and thus acquires the best compromise alternative. Additionally, this paper explores the problem of supplier selection and conducts a comparative discussion to validate the effectiveness and applicability of the proposed interval type-2 fuzzy LINMAP method with approximate ideal solutions. Furthermore, the proposed method is applied to address a marketplace decision difficulty (MPDD)-prone decision-making problem to provide additional contributions for practical implications.
Type-2 Fuzzy Soft Sets and Their Applications in Decision Making. Molodtsov introduced the theory of soft sets, which can be used as a general mathematical tool for dealing with uncertainty. This paper aims to introduce the concept of the type-2 fuzzy soft set by integrating the type-2 fuzzy set theory and the soft set theory. Some operations on the type-2 fuzzy soft sets are given. Furthermore, we investigate the decision making based on type-2 fuzzy soft sets. By means of level soft sets, we propose an adjustable approach to type-2 fuzzy-soft-set based decision making and give some illustrative examples. Moreover, we also introduce the weighted type-2 fuzzy soft set and examine its application to decision making.
Multi-Criteria And Multi-Stage Facility Location Selection Under Interval Type-2 Fuzzy Environment: A Case Study For A Cement Factory The study proposes a comprehensive and systematic approach for multi-criteria and multi-stage facility location selection problem. To handle with high and more uncertainty in the evaluation and selection processes, the problem is solved by using multi-criteria decision making technique with interval Type-2 fuzzy sets. The study contributes the facility location selection literature by introducing the application of fuzzy TOPSIS method with interval Type-2 fuzzy sets. Finally, the suggested approach is applied to a real life region and site selection problem of a cement factory.
An interval type-2 fuzzy extension of the TOPSIS method using alpha cuts The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is currently probably one of the most popular methods for Multiple Criteria Decision Making (MCDM). The method was primarily developed for dealing with real-valued data. Nevertheless, in practice it is often hard to present precisely exact ratings of alternatives with respect to local criteria, and as a result these ratings are presented as fuzzy values. Many recent papers have been devoted to the fuzzy extension of the TOPSIS method, but only a few works provided type-2 fuzzy extensions, whereas such extensions seem to be very useful for the solution of many real-world problems, e.g., the Multiple Criteria Group Decision Making problem. Since the proposed type-2 fuzzy extensions of the TOPSIS method have some limitations and drawbacks, in this paper we propose an interval type-2 fuzzy extension of the TOPSIS method realized with the use of the α-cuts representation of the interval type-2 fuzzy values (IT2FVs). This extension is free of the limitations of the known methods. The proposed method is realized for the cases of perfectly normal and normal IT2FVs. Illustrative examples are presented to show the features of the proposed method.
Likelihoods of interval type-2 trapezoidal fuzzy preference relations and their application to multiple criteria decision analysis. Interval type-2 fuzzy sets are useful and valuable for depicting uncertainty and managing imprecision in decision information. In particular, interval type-2 trapezoidal fuzzy numbers, as a special case of interval type-2 fuzzy sets, can efficiently express qualitative evaluations or assessments. In this work, the concept of the likelihoods of interval type-2 trapezoidal fuzzy preference relations based on lower and upper likelihoods is investigated, and the relevant properties are discussed. This paper focuses on the use of likelihoods in addressing multiple criteria decision analysis problems in which the evaluative ratings of the alternatives and the importance weights of the criteria are expressed as interval type-2 trapezoidal fuzzy numbers. A new likelihood-based decision-making method is developed using the useful concepts of likelihood-based performance indices, likelihood-based comprehensive evaluation values, and signed distance-based evaluation values. A simplified version of the proposed method is also provided to adapt the decision-making context in which the importance weights of the criteria take the form of ordinary numbers. The practical effectiveness of the proposed method is validated with four applications, and several comparative analyses are conducted to verify the advantages of the proposed method over other multiple criteria decision-making methods.
An extended VIKOR method based on prospect theory for multiple attribute decision making under interval type-2 fuzzy environment Interval type-2 fuzzy set (IT2FS) offers an interesting avenue to handle high-order information and uncertainty in decision support systems (DSS) when dealing with both extrinsic and intrinsic aspects of uncertainty. Recently, multiple attribute decision making (MADM) problems with interval type-2 fuzzy information have received increasing attention from both researchers and practitioners. As a result, a number of interval type-2 fuzzy MADM methods have been developed. In this paper, we extend the VIKOR (VlseKriterijumska Optimizacijia I Kompromisno Resenje, in Serbian) method based on prospect theory to accommodate interval type-2 fuzzy circumstances. First, we propose a new distance measure for IT2FS, which comes as a sound alternative when compared with the existing interval type-2 fuzzy distance measures. Then, a decision model integrating the VIKOR method and prospect theory is proposed. A case study concerning a high-tech risk evaluation is provided to illustrate the applicability of the proposed method. In addition, a comparative analysis with the interval type-2 fuzzy TOPSIS method is also presented.
The sampling method of defuzzification for type-2 fuzzy sets: Experimental evaluation For generalised type-2 fuzzy sets the defuzzification process has historically been slow and inefficient. This has hampered the development of type-2 Fuzzy Inferencing Systems for real applications and therefore no advantage has been taken of the ability of type-2 fuzzy sets to model higher levels of uncertainty. The research reported here provides a novel approach for improving the speed of defuzzification for discretised generalised type-2 fuzzy sets. The traditional type-reduction method requires every embedded type-2 fuzzy set to be processed. The high level of redundancy in the huge number of embedded sets inspired the development of our sampling method, which randomly samples the embedded sets and processes only the sample. The paper presents detailed experimental results for defuzzification of constructed sets of known defuzzified value. The sampling defuzzifier is compared on aggregated type-2 fuzzy sets resulting from the inferencing stage of a FIS, in terms of accuracy and speed, with other methods including the exhaustive method and techniques based on the α-planes representation. The results indicate that by taking only a sample of the embedded sets we are able to dramatically reduce the time taken to process a type-2 fuzzy set with very little loss in accuracy.
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(-1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then, for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints y_k = ⟨f#, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f#‖_ℓ2 ≤ C_p · R · (K/log N)^(-r), r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.
Rough clustering
Fast Solution of l1-Norm Minimization Problems When the Solution May Be Sparse The minimum ℓ1-norm solution to an underdetermined system of linear equations y=Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for ℓ1 minimization in many large-scale applications. In this paper, the Homotopy method, origin...
Fuzzy group assessment for facility location decision Facility location decisions are a critical element in strategic planning for multinational enterprises (MNEs). In this study, we propose a group fuzzy assessment method to tackle the facility location decisions for MNEs based on investment environment factors. Via the proposed group assessment model, our result will be more objective and unbiased since it is generated by a group of evaluators.
Data Mining with Graphical Models The explosion of data stored in commercial or administrational databases calls for intelligent techniques to discover the patterns hidden in them and thus to exploit all available information. Therefore a new line of research has recently been established, which became known under the names "Data Mining" and "Knowledge Discovery in Databases". In this paper we study a popular technique from its arsenal of methods to do dependency analysis, namely learning inference networks (also called "graphical models") from data. We review the already well-known probabilistic networks and provide an introduction to the recently developed and closely related possibilistic networks.
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and realization of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and the aggregation of experts’ opinions depending on the parameter characterizing the aggregation situation.
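As a reminder of how plain OWA aggregation works (the situational, entropy-adjusted weight modification described above is not reproduced), here is a minimal sketch with hypothetical criterion scores; the weight vectors are illustrative assumptions.

```python
def owa(values, weights):
    """Ordered weighted averaging: weights attach to rank positions, not criteria."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)          # descending order
    return sum(w * v for w, v in zip(weights, ordered))

# Hypothetical scores for one countermeasure under four criteria.
scores = [0.9, 0.4, 0.7, 0.6]
print(owa(scores, [0.4, 0.3, 0.2, 0.1]))   # optimistic weighting
print(owa(scores, [0.1, 0.2, 0.3, 0.4]))   # pessimistic weighting
```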
1.1
0.1
0.1
0.1
0.05
0.033333
0.007692
0
0
0
0
0
0
0
Statistical inference about the means of fuzzy random variables: Applications to the analysis of fuzzy- and real-valued data The expected value of a fuzzy random variable plays an important role as central summary measure, and for this reason, in the last years valuable statistical inferences about the means of the fuzzy random variables have been developed. Some of the main contributions in this topic are gathered and discussed. Concerning the hypothesis testing, the bootstrap techniques have empirically shown to be efficient and powerful. Algorithms to apply these techniques in practice and some illustrative real-life examples are included. On the other hand, it has been recently shown that the distribution of any real-valued random variable can be represented by means of a fuzzy set. The characterizing fuzzy sets correspond to the expected value of a certain fuzzy random variable based on a family of fuzzy-valued transformations of the original real-valued ones. They can be used for descriptive/exploratory or inferential purposes. This fact adds an extra-value to the fuzzy expected value and the preceding statistical procedures, that can be used in statistics about real distributions.
Simulation of fuzzy random variables This work deals with the simulation of fuzzy random variables, which can be used to model various realistic situations, where uncertainty is not only present in form of randomness but also in form of imprecision, described by means of fuzzy sets. Utilizing the common arithmetics in the space of all fuzzy sets only induces a conical structure. As a consequence, it is difficult to directly apply the usual simulation techniques for functional data. In order to overcome this difficulty two different approaches based on the concept of support functions are presented. The first one makes use of techniques for simulating Hilbert space-valued random elements and afterwards projects on the cone of all fuzzy sets. It is shown by empirical results that the practicability of this approach is limited. The second approach imitates the representation of every element of a separable Hilbert space in terms of an orthonormal basis directly on the space of fuzzy sets. In this way, a new approximation of fuzzy sets useful to approximate and simulate fuzzy random variables is developed. This second approach is adequate to model various realistic situations.
A generalized real-valued measure of the inequality associated with a fuzzy random variable Fuzzy random variables have been introduced by Puri and Ralescu as an extension of random sets. In this paper, we first introduce a real-valued generalized measure of the “relative variation” (or inequality) associated with a fuzzy random variable. This measure is inspired in Csiszár's f-divergence, and extends to fuzzy random variables many well-known inequality indices. To guarantee certain relevant properties of this measure, we have to distinguish two main families of measures which will be characterized. Then, the fundamental properties are derived, and an outstanding measure in each family is separately examined on the basis of an additive decomposition property and an additive decomposability one. Finally, two examples illustrate the application of the study in this paper.
Multi-sample test-based clustering for fuzzy random variables A clustering method to group independent fuzzy random variables observed on a sample by focusing on their expected values is developed. The procedure is iterative and based on the p-value of a multi-sample bootstrap test. Thus, it simultaneously takes into account fuzziness and stochastic variability. Moreover, an objective stopping criterion leading to statistically equal groups different from each other is provided. Some simulations to show the performance of this inferential approach are included. The results are illustrated by means of a case study.
Bootstrap techniques and fuzzy random variables: Synergy in hypothesis testing with fuzzy data In previous studies we have stated that the well-known bootstrap techniques are a valuable tool in testing statistical hypotheses about the means of fuzzy random variables, when these variables are supposed to take on a finite number of different values and these values being fuzzy subsets of the one-dimensional Euclidean space. In this paper we show that the one-sample method of testing about the mean of a fuzzy random variable can be extended to general ones (more precisely, to those whose range is not necessarily finite and whose values are fuzzy subsets of finite-dimensional Euclidean space). This extension is immediately developed by combining some tools in the literature, namely, bootstrap techniques on Banach spaces, a metric between fuzzy sets based on the support function, and an embedding of the space of fuzzy random variables into a Banach space which is based on the support function.
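Below is a minimal sketch of the one-sample bootstrap test idea, shown on crisp real-valued data for brevity; for fuzzy data one would replace the absolute difference of means with a support-function-based metric between fuzzy means, as the abstract indicates. The sample values and the centring trick used to impose the null hypothesis are illustrative assumptions.

```python
import numpy as np

def bootstrap_mean_test(sample, mu0, n_boot=5000, seed=0):
    """One-sample bootstrap test of H0: E[X] = mu0 (real-valued analogue)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    t_obs = abs(sample.mean() - mu0)
    centred = sample - sample.mean() + mu0          # impose H0 on the resampling
    count = 0
    for _ in range(n_boot):
        boot = rng.choice(centred, size=sample.size, replace=True)
        if abs(boot.mean() - mu0) >= t_obs:
            count += 1
    return count / n_boot                           # bootstrap p-value

print(bootstrap_mean_test([2.1, 1.8, 2.5, 2.0, 2.3, 1.9], mu0=2.0))
```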
Estimating the expected value of fuzzy random variables in the stratified random sampling from finite populations In this paper, we consider the problem of estimating the expected value of a fuzzy-valued random element in the stratified random sampling from finite populations. To this purpose, we quantify the associated sampling error by means of a generalized measure introduced in a previous paper. We also suggest a way to compare different variates for stratification, as well as to test the adequacy of a certain one.
A behavioural model for vague probability assessments I present an hierarchical uncertainty model that is able to represent vague probability assessments, and to make inferences based on them. This model can be given an interpretation in terms of the behaviour of a modeller in the face of uncertainty, and is based on Walley's theory of imprecise probabilities. It is formally closely related to Zadeh's fuzzy probabilities, but it has a different interpretation, and a different calculus. Through rationality (coherence) arguments, the hierarchical model is shown to lead to an imprecise first-order uncertainty model that can be used in decision making, and as a prior in statistical reasoning.
A Group Decision Support Approach to Evaluate Experts for R&D Project Selection In R&D project selection, experts (or external reviewers) always play a very important role because their opinions will have great influence on the outcome of the project selection. It is also undoubted that experts with high-expertise level will make useful and professional judgments on the projects to be selected. So, how to measure the expertise level of experts and select the most appropriate ...
Fuzzy Sets
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing.
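A compact RANSAC sketch for the simplest model (a 2-D line) follows, assuming a fixed inlier tolerance and iteration count; the location-determination application in the abstract would use a different minimal solver but the same consensus loop.

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b to data with gross outliers by random sample consensus."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)   # minimal sample
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = int((residuals < tol).sum())               # consensus set size
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.01 * np.random.default_rng(1).standard_normal(50)
y[::10] += 5                                   # inject gross outliers
print(ransac_line(np.column_stack([x, y])))
```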
Aggregation with generalized mixture operators using weighting functions This paper regards weighted aggregation operators in multiple attribute decision making and its main goal is to investigate ways in which weights can depend on the satisfaction degrees of the various attributes (criteria). We propose and discuss two types of weighting functions that penalize poorly satisfied attributes and reward well-satisfied attributes. We discuss in detail the characteristics and properties of both functions. Moreover, we present an illustrative example to clarify the use and behaviour of such weighting functions, comparing the results with those of standard weighted averaging operators.
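As a minimal sketch of a mixture operator whose weights depend on the satisfaction degrees themselves: the linear weighting function used here is a plausible illustration, not one of the two functions proposed in the paper. An increasing function rewards well-satisfied attributes; a decreasing one would penalize poorly satisfied ones instead.

```python
def mixture(values, g=lambda s: 0.3 + 0.7 * s):
    """Generalized mixture: weights are a function of each satisfaction degree."""
    weights = [g(v) for v in values]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

sat = [0.9, 0.2, 0.7]           # hypothetical satisfaction degrees
print(mixture(sat))             # rewards well-satisfied attributes
print(sum(sat) / len(sat))      # plain average, for comparison
```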
Fuzzy Model Identification and Self-Learning for Dynamic Systems The algorithms of fuzzy model identification and self-learning for multi-input/multi-output dynamic systems are proposed. The required computer capacity and time for implementing the proposed algorithms and related resulting models are significantly reduced by introducing the concept of the "referential fuzzy sets." Two numerical examples are given to show that the proposed algorithms can provide the fuzzy models with satisfactory accuracy.
Analysis of variance for fuzzy data A method for imposing imprecise (fuzzy) data upon the traditional ANOVA model is proposed in this article. We work with the h-level sets of the fuzzy data so that the traditional ANOVA method for real-valued data can be invoked. We propose the decision rules that are used to accept or reject the null and alternative hypotheses with the notions of pessimistic degree and optimistic degree by solving optimization problems. Finally, we provide a computational procedure and an example to clarify the discussions in this article.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the $L_1$-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of the imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that in contrast to the conventional $L_2$-norm regularization method and total variation (TV) regularization method, the $L_1$-norm regularization method can sharpen the edges and is more robust against data noise.
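For context, a generic split Bregman (ADMM-style) iteration for an $L_1$-regularized least-squares problem is sketched below; the EIT forward operator, regularization parameters, and stopping rule are all placeholders, and this is not the paper's solver.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def split_bregman_l1(A, y, lam=0.1, mu=1.0, n_iter=100):
    """min_x 0.5*||A x - y||^2 + lam*||x||_1 via the splitting d = x."""
    n = A.shape[1]
    x, d, b = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + mu * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Aty + mu * (d - b))   # quadratic subproblem
        d = soft(x + b, lam / mu)                      # shrinkage subproblem
        b = b + x - d                                  # Bregman (dual) update
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x0 = np.zeros(80)
x0[:4] = [3, -2, 1.5, 4]                               # sparse ground truth
x_hat = split_bregman_l1(A, A @ x0 + 0.01 * rng.standard_normal(40))
print(x_hat[:6].round(2))
```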
1.035257
0.044804
0.039791
0.022222
0.015142
0.004111
0.000105
0.000012
0.000001
0
0
0
0
0
Disambiguation by Association as a Practical Method: Experiments and Findings ... We have replicated two well known methods (of word sense disambiguation) due to Lesk (1986) and Ide and Veronis (1990), and have conducted trials using both methods on a corpus of 100 sentences. We also carried out experiments to determine whether the use of syntactic tagging would improve results. There are three principal findings of this work. Firstly, syntactic tagging improves the performance of all the disambiguation algorithms. Secondly, the Ide and Veronis method of depth 2...
Using corpus statistics and WordNet relations for sense identification Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet's lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples.
A method for disambiguating word senses in a large corpus Word sense disambiguation has been recognized as a major problem in natural language processing research for over forty years. Both quantitative and qualitative methods have been tried, but much of this work has been stymied by difficulties in acquiring appropriate lexical resources. The availability of this testing and training material has enabled us to develop quantitative disambiguation methods that achieve 92% accuracy in discriminating between two very distinct senses of a noun. In the training phase, we collect a number of instances of each sense of the polysemous noun. Then in the testing phase, we are given a new instance of the noun, and are asked to assign the instance to one of the senses. We attempt to answer this question by comparing the context of the unknown instance with contexts of known instances using a Bayesian argument that has been applied successfully in related tasks such as author identification and information retrieval. The proposed method is probably most appropriate for those aspects of sense disambiguation that are closest to the information retrieval task. In particular, the proposed method was designed to disambiguate senses that are usually associated with different topics.
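A toy naive-Bayes sense classifier in the spirit of the Bayesian argument described above is sketched here; the two hypothetical "bank" contexts and the add-one smoothing are assumptions for illustration only, not the paper's trained classifier.

```python
import math
from collections import Counter

def train(labelled):
    """labelled: list of (sense, context_words). Returns per-sense word counts."""
    counts, priors = {}, Counter()
    for sense, words in labelled:
        priors[sense] += 1
        counts.setdefault(sense, Counter()).update(words)
    return counts, priors

def disambiguate(context, counts, priors, alpha=1.0):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense)."""
    best, best_score = None, -math.inf
    vocab = {w for c in counts.values() for w in c}
    for sense, wc in counts.items():
        total = sum(wc.values()) + alpha * len(vocab)
        score = math.log(priors[sense]) + sum(
            math.log((wc[w] + alpha) / total) for w in context if w in vocab)
        if score > best_score:
            best, best_score = sense, score
    return best

data = [("finance", "money loan interest rate".split()),
        ("river", "water erosion grassy shore".split())]   # toy "bank" contexts
counts, priors = train(data)
print(disambiguate("loan interest".split(), counts, priors))
```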
Estimating upper and lower bounds on the performance of word-sense disambiguation programs We have recently reported on two new word-sense disambiguation systems, one trained on bilingual material (the Canadian Hansards) and the other trained on monolingual material (Roget's Thesaurus and Grolier's Encyclopedia). After using both the monolingual and bilingual classifiers for a few months, we have convinced ourselves that the performance is remarkably good. Nevertheless, we would really like to be able to make a stronger statement, and therefore, we decided to try to develop some more objective evaluation measures. Although there has been a fair amount of literature on sense-disambiguation, the literature does not offer much guidance in how we might establish the success or failure of a proposed solution such as the two systems mentioned in the previous paragraph. Many papers avoid quantitative evaluations altogether, because it is so difficult to come up with credible estimates of performance. This paper will attempt to establish upper and lower bounds on the level of performance that can be expected in an evaluation. An estimate of the lower bound of 75% (averaged over ambiguous types) is obtained by measuring the performance produced by a baseline system that ignores context and simply assigns the most likely sense in all cases. An estimate of the upper bound is obtained by assuming that our ability to measure performance is largely limited by our ability to obtain reliable judgments from human informants. Not surprisingly, the upper bound is very dependent on the instructions given to the judges. Jorgensen, for example, suspected that lexicographers tend to depend too much on judgments by a single informant and found considerable variation over judgments (only 68% agreement), as she had suspected. In our own experiments, we have set out to find word-sense disambiguation tasks where the judges can agree often enough so that we could show that they were outperforming the baseline system. Under quite different conditions, we have found 96.8% agreement over judges.
Engineering “word experts” for word disambiguation
Centering: a framework for modeling the local coherence of discourse This paper concerns relationships among focus of attention, choice of referring expression, and perceived coherence of utterances within a discourse segment. It presents a framework and initial theory of centering intended to model the local component of attentional state. The paper examines interactions between local coherence and choice of referring expressions; it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions, given a particular attentional state. It demonstrates that the attentional state properties modeled by centering can account for these differences.
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Construction of Interval-Valued Fuzzy Relations With Application to the Generation of Fuzzy Edge Images In this paper, we present a new construction method for interval-valued fuzzy relations (interval-valued fuzzy images) from fuzzy relations (fuzzy images) by vicinity. This construction method is based on the concepts of triangular norm ($t$-norm) and triangular conorm ($t$-conorm). We analyze the effect of using different $t$-norms and $t$-conorms. Furthermore, we examine the influence of different sizes of the submatrix around each element of a fuzzy relation on the interval-valued fuzzy relation. Finally, we apply our construction method to image processing, and we compare the results of our approach with those obtained by means of other, i.e., fuzzy and nonfuzzy, techniques.
Multilevel Quadrature for Elliptic Parametric Partial Differential Equations in Case of Polygonal Approximations of Curved Domains Multilevel quadrature methods for parametric operator equations such as the multilevel (quasi-) Monte Carlo method resemble a sparse tensor product approximation between the spatial variable and the parameter. We employ this fact to reverse the multilevel quadrature method by applying differences of quadrature rules to finite element discretizations of increasing resolution. Besides being algorithmically more efficient if the underlying quadrature rules are nested, this way of performing the sparse tensor product approximation enables the easy use of nonnested and even adaptively refined finite element meshes. We moreover provide a rigorous error and regularity analysis addressing the variational crimes of using polygonal approximations of curved domains and numerical quadrature of the bilinear form. Our results facilitate the construction of efficient multilevel quadrature methods based on deterministic high order quadrature rules for the stochastic parameter. Numerical results in three spatial dimensions are provided to illustrate the approach.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
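The comparison can be reproduced in a few lines; the sketch below uses numpy instead of the paper's MATLAB and an assumed test integrand. It builds Clenshaw-Curtis by integrating the Chebyshev interpolant at Chebyshev points exactly, and uses numpy's Gauss-Legendre nodes for Gauss quadrature.

```python
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

def clenshaw_curtis(f, n):
    """Integrate the degree-n Chebyshev interpolant of f over [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)        # Chebyshev (Lobatto) points
    c = C.chebfit(x, f(x), n)                        # interpolating coefficients
    k = np.arange(0, n + 1, 2)
    return np.sum(c[::2] * 2.0 / (1.0 - k**2))       # exact integrals of even T_k

def gauss(f, n):
    x, w = L.leggauss(n)                             # n-point Gauss-Legendre rule
    return np.dot(w, f(x))

f = lambda x: np.exp(x) * np.cos(3 * x)              # assumed smooth integrand
exact = gauss(f, 60)                                 # high-order reference value
for n in (4, 8, 16, 32):
    print(n, abs(clenshaw_curtis(f, n) - exact), abs(gauss(f, n) - exact))
```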
Induced uncertain linguistic OWA operators applied to group decision making The ordered weighted averaging (OWA) operator was developed by Yager [IEEE Trans. Syst., Man, Cybernet. 18 (1988) 183]. Later, Yager and Filev [IEEE Trans. Syst., Man, Cybernet.--Part B 29 (1999) 141] introduced a more general class of OWA operators called the induced ordered weighted averaging (IOWA) operators, which take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. The aim of this paper is to develop some induced uncertain linguistic OWA (IULOWA) operators, in which the second components are uncertain linguistic variables. Some desirable properties of the IULOWA operators are studied, and then, the IULOWA operators are applied to group decision making with uncertain linguistic information.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q ) is greater than 1 + ½ (1+√5)√ q unless δ−1 is a multiple of p , where q=p n . When q is odd and not a square, we are able to improve this lower bound to roughly √3 q .
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.21393
0.21393
0.038395
0.026893
0.000711
0.000267
0
0
0
0
0
0
0
0
The CP-QAE-I: A video dataset for exploring the effect of personality and culture on perceived quality and affect in multimedia Perception of quality and affect are subjective, driven by a complex interplay between system and human factors. Is it, however, possible to model these factors to predict subjective perception? To pursue this question, broader collaboration is needed to sample all aspects of personality, culture, and other human factors. Thus, an appropriate dataset is needed to integrate such efforts. Here, the CP-QAE-I is proposed. This is a video dataset containing 144 video sequences based on 12 short movie clips. These vary by: frame rate; frame dimension; bit-rate; and affect. An evaluation by 76 participants drawn from the United Kingdom, Singapore, India, and China suggests adequate distinction between the video sequences in terms of perceived quality as well as positive and negative affect. Nationality also emerged as a significant predictor, supporting the rationale for further study. By sharing the dataset, this paper aims to promote work modeling human factors in multimedia perception.
Modelling Human Factors in Perceptual Multimedia Quality: On The Role of Personality and Culture Perception of multimedia quality is shaped by a rich interplay between system, context and human factors. While system and context factors are widely researched, few studies consider human factors as sources of systematic variance. This paper presents an analysis on the influence of personality and cultural traits on the perception of multimedia quality. A set of 144 video sequences (from 12 short movie excerpts) were rated by 114 participants from a cross-cultural population, producing 1232 ratings. On this data, three models are compared: a baseline model that only considers system factors; an extended model that includes personality and culture as human factors; and an optimistic model in which each participant is modelled as a random effect. An analysis shows that personality and cultural traits represent 9.3% of the variance attributable to human factors while human factors overall predict an equal or higher proportion of variance compared to system factors. In addition, the quality-enjoyment correlation varied across the excerpts. This suggests that human factors play an important role in perceptual multimedia quality, but further research to explore moderation effects and a broader range of human factors is warranted.
A study on the effects of quality of service parameters on perceived video quality In this paper a video database, ReTRiEVED, to be used in evaluating the performances of video quality metrics is presented. The database contains 184 distorted videos obtained from eight videos of different content. Packet loss rate, jitter, delay, and throughput have been considered as possible distortions resulting from video transmission. Video sequences, collected subjective scores, and results of the performed analysis are made publicly available for the research community, for designing, testing and comparing objective video quality metrics. The analysis of the results shows that packet loss rate, throughput/bandwidth, and jitter have significant effect on perceived quality, while an initial delay does not significantly affect the perceived quality.
Selecting scenes for 2D and 3D subjective video quality tests. This paper presents recommended techniques for choosing video sequences for subjective experiments. Subjective video quality assessment is a well-understood field, yet scene selection is often driven by convenience or content availability. Three-dimensional testing is a newer field that requires new considerations for scene selection. The impact of experiment design on best practices for scene selection will also be considered. A semi-automatic selection process for content sets for subjective experiments will be proposed.
Factors influencing quality of experience of commonly used mobile applications. Increasingly, we use mobile applications and services in our daily life activities, to support our needs for information, communication or leisure. However, user acceptance of a mobile application depends on at least two conditions: the application's perceived experience, and the appropriateness of the application to the user's context and needs. However, we have a weak understanding of a mobile u...
Packet Reordering Metrics
The variety generated by the truth value algebra of type-2 fuzzy sets This paper addresses some questions about the variety generated by the algebra of truth values of type-2 fuzzy sets. Its principal result is that this variety is generated by a finite algebra, and in particular is locally finite. This provides an algorithm for determining when an equation holds in this variety. It also sheds light on the question of determining an equational axiomatization of this variety, although this problem remains open.
MIMO technologies in 3GPP LTE and LTE-advanced 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.
Galerkin Finite Element Approximations of Stochastic Elliptic Partial Differential Equations We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling We study an instance of high-dimensional inference in which the goal is to estimate a matrix $\Theta^* \in \mathbb{R}^{m_1 \times m_2}$ on the basis of N noisy observations. The unknown matrix $\Theta^*$ is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive nonasymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes and recovery of low-rank matrices from random projections. These results involve nonasymptotic random matrix theory to establish that the RSC condition holds, and to determine an appropriate choice of regularization parameter. Simulation results show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
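A small numerical illustration of the nuclear-norm machinery shared by the two abstracts above: soft-thresholding the singular values is the proximal operator of the nuclear norm, and applying it to a noisy observation of a low-rank matrix gives a simple nuclear-norm-regularized estimator. The matrix sizes, noise level, and the heuristic regularization parameter are assumptions, not values from either paper.

```python
import numpy as np

def svt(Y, lam):
    """Prox of the nuclear norm: soft-threshold the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
m1, m2, r = 40, 30, 3
Theta = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))  # rank-3 truth
Y = Theta + 0.5 * rng.standard_normal((m1, m2))                      # noisy observation
Theta_hat = svt(Y, lam=0.5 * (np.sqrt(m1) + np.sqrt(m2)))            # heuristic lam ~ noise * (sqrt(m1)+sqrt(m2))
print(np.linalg.matrix_rank(Theta_hat),
      np.linalg.norm(Theta_hat - Theta) / np.linalg.norm(Theta))
```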
Design of interval type-2 fuzzy models through optimal granularity allocation In this paper, we offer a new design methodology of type-2 fuzzy models whose intent is to effectively exploit the uncertainty of non-numeric membership functions. A new performance index, which guides the development of the fuzzy model, is used to navigate the construction of the fuzzy model. The underlying idea is that an optimal granularity allocation throughout the membership functions used in the fuzzy model leads to the best design. In contrast to the commonly utilized criterion where one strives for the highest accuracy of the model, the proposed index is formed in such a way so that the type-2 fuzzy model produces intervals, which "cover" the experimental data and at the same time are made as narrow (viz. specific) as possible. A genetic algorithm is proposed to automate the design process and further improve the results by carefully exploiting the search space. Experimental results show the efficiency of the proposed design methodology.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of human knowledge vagueness. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
The laws of large numbers for fuzzy random variables A new approach to the weak and strong laws of large numbers for fuzzy random variables is discussed in this paper by proposing the notions of convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend them to the convergence in probability and convergence with probability one for fuzzy random variables. We provide the notion of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally we come up with the weak and strong laws of large numbers for fuzzy random variables in the weak and strong sense.
1.11
0.1
0.073333
0.001026
0.000317
0.000058
0
0
0
0
0
0
0
0
An efficient surrogate-based method for computing rare failure probability In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation: it incurs much reduced simulation effort compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probability as small as $10^{-12}\sim10^{-6}$ with only several hundred samples.
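Here is a stripped-down, one-dimensional sketch of cross-entropy importance sampling for a rare event, without the surrogate model; the Gaussian parametric family, the elite fraction, and the toy limit-state function are all assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import norm

def ce_importance_sampling(g, level, n=2000, rho=0.1, seed=0):
    """Estimate P[g(X) >= level] for X ~ N(0, 1) via cross-entropy tilting."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(50):                               # CE iterations
        x = rng.normal(mu, sigma, n)
        gx = g(x)
        gamma = min(level, np.quantile(gx, 1 - rho))  # intermediate level
        elite = x[gx >= gamma]
        mu, sigma = elite.mean(), elite.std() + 1e-12 # MLE update on elites
        if gamma >= level:
            break
    # Final importance-sampling estimate with likelihood ratios.
    x = rng.normal(mu, sigma, n)
    w = norm.pdf(x) / norm.pdf(x, mu, sigma)
    return np.mean(w * (g(x) >= level))

g = lambda x: x                                       # toy limit-state function
print(ce_importance_sampling(g, 5.0), 1 - norm.cdf(5.0))   # estimate vs exact
```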
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
Evaluation of failure probability via surrogate models Evaluation of failure probability of a given system requires sampling of the system response and can be computationally expensive. Therefore it is desirable to construct an accurate surrogate model for the system response and subsequently to sample the surrogate model. In this paper we discuss the properties of this approach. We demonstrate that the straightforward sampling of a surrogate model can lead to erroneous results, no matter how accurate the surrogate model is. We then propose a hybrid approach by sampling both the surrogate model in a "large" portion of the probability space and the original system in a "small" portion. The resulting algorithm is significantly more efficient than the traditional sampling method, and is more accurate and robust than the straightforward surrogate model approach. A rigorous convergence proof is established for the hybrid approach, and practical implementation is discussed. Numerical examples are provided to verify the theoretical findings and demonstrate the efficiency gain of the approach.
A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of “curse of dimensionality” commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions For $d$-dimensional tensors with possibly large $d > 3$, a hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leaves corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.
A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
Process and environmental variation impacts on ASIC timing With each semiconductor process node, the impacts on performance of environmental and semiconductor process variations become a larger portion of the cycle time of the product. Simple guard-banding for these effects leads to increased product development times and uncompetitive products. In addition, traditional static timing methodologies are unable to cope with the large number of permutations of process, voltage, and temperature corners created by these independent sources of variation. In this paper we will discuss the sources of variation; by introducing the concepts of systematic inter-die variation, systematic intra-die variation and intra-die random variation. We will show that by treating these forms of variations differently, we can achieve design closure with less guard-banding than traditional methods.
A comprehensive theory of trichotomous evaluative linguistic expressions In this paper, a logical theory of the so-called trichotomous evaluative linguistic expressions (TEv-expressions) is presented. These are frequent expressions of natural language, such as "small, very small, roughly medium, extremely big", etc. The theory is developed using the formal system of higher-order fuzzy logic, namely the fuzzy type theory (a generalization of classical type theory). First, we discuss informally what the properties of the meaning of TEv-expressions are. Then we construct step by step the axioms of a formal logical theory $T^{Ev}$ of TEv-expressions and prove various properties of $T^{Ev}$. All the proofs are syntactical and so, our theory is very general. We also outline the construction of a canonical model of $T^{Ev}$. The main elegance of our theory lies in the fact that the semantics of all kinds of evaluative expressions is modeled in a unified way. We also prove theorems demonstrating that essential properties of the vagueness phenomenon can be captured within our theory.
Monochromatic and Heterochromatic Subgraphs in Edge-Colored Graphs - A Survey Nowadays the terms monochromatic and heterochromatic (or rainbow, multicolored) subgraphs of an edge-colored graph appear frequently in the literature, and many results on this topic have been obtained. In this paper, we survey results on this subject. We classify the results into the following categories: vertex-partitions by monochromatic subgraphs, such as cycles, paths, trees; vertex partitions by some kinds of heterochromatic subgraphs; the computational complexity of these partition problems; some kinds of large monochromatic and heterochromatic subgraphs. We have to point out that there are a lot of results on Ramsey-type problems of monochromatic and heterochromatic subgraphs. However, it is not our purpose to include them in this survey because this is slightly different from our topic and also contains too large an amount of results to deal with together. There are also some interesting results on vertex-colored graphs, but we do not include them, either.
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
Statistical design and optimization of SRAM cell for yield enhancement We have analyzed and modeled the failure probabilities of SRAM cells due to process parameter variations. A method to predict the yield of a memory chip based on the cell failure probability is proposed. The developed method is used in an early stage of a design cycle to minimize memory failure probability by statistical sizing of the SRAM cell.
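As a back-of-the-envelope illustration of how a cell failure probability maps to memory yield (assuming independent cell failures and no redundancy; the paper's statistical sizing goes well beyond this), the numbers below are hypothetical.

```python
import math

p_cell = 2e-9                  # hypothetical probability that a single cell fails
n_cells = 32 * 1024 * 1024     # 32 Mb array

# With independent cells, memory yield ~ (1 - p_cell)^n_cells,
# well approximated by exp(-n_cells * p_cell) for small p_cell.
yield_exact = (1.0 - p_cell) ** n_cells
yield_approx = math.exp(-n_cells * p_cell)
print(yield_exact, yield_approx)   # both ~0.935 for these numbers
```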
A new hybrid artificial neural networks and fuzzy regression model for time series forecasting Quantitative methods have nowadays become very important tools for forecasting purposes in financial markets, leading to improved decisions and investments. Forecasting accuracy is one of the most important factors involved in selecting a forecasting method; hence, research aimed at improving the effectiveness of time series models has never stopped. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, ANNs need a large amount of historical data in order to yield accurate results. In a real world situation and in financial markets specifically, the environment is full of uncertainties and changes occur rapidly; thus, future situations must often be forecasted using the scant data made available over a short span of time. Therefore, forecasting in these situations requires methods that work efficiently with incomplete data. Although fuzzy forecasting methods are suitable for incomplete data situations, their performance is not always satisfactory. In this paper, based on the basic concepts of ANNs and fuzzy regression models, a new hybrid method is proposed that yields more accurate results with incomplete data sets. In our proposed model, the advantages of ANNs and fuzzy regression are combined to overcome the limitations in both ANNs and fuzzy regression models. The empirical results of financial market forecasting indicate that the proposed model can be an effective way of improving forecasting accuracy.
Group decision-making model using fuzzy multiple attributes analysis for the evaluation of advanced manufacturing technology Selection of advanced manufacturing technology is important for improving manufacturing system competitiveness. This study builds a group decision-making model using fuzzy multiple attributes analysis to evaluate the suitability of manufacturing technology. Since numerous attributes have been considered in evaluating the manufacturing technology suitability, most information available in this stage is subjective and imprecise, and fuzzy sets theory provides a mathematical framework for modeling imprecision and vagueness. The proposed approach involved developing a fusion method of fuzzy information, which was assessed using both linguistic and numerical scales. In addition, an interactive decision analysis is developed to make a consistent decision. When evaluating the suitability of manufacturing technology, it may be necessary to improve upon the technology, and naturally advanced manufacturing technology is seen as the best direction for improvement. The flexible manufacturing system adopted in the Taiwanese bicycle industry is used in this study to illustrate the computational process of the proposed method. The results of this study are more objective and unbiased, owing to being generated by a group of decision-makers.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses, SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
1.1
0.1
0.04
0.033333
0.01
0.001087
0
0
0
0
0
0
0
0
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus now lies on the user-perceived quality, as opposed to the classically proposed network-centered approach. In this paper we overview the most relevant challenges to performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Queuing based optimal scheduling mechanism for QoE provisioning in cognitive radio relaying network In cognitive radio network (CRN), secondary users (SU) can share the licensed spectrum with the primary users (PU). Compared with the traditional network, spectrum utilization in CRN will be greatly improved. In order to ensure the performance of SUs as well as PU, wireless relaying can be employed to improve the system capacity. Meanwhile, quality-of-experience (QoE) should be considered and provisioned in the relay scheduling scheme to ensure user experience and comprehensive network performance. In this paper, we studied a QoE provisioning mechanism for a queuing based optimal relay scheduling problem in CRN. We designed a QoE provisioning scheme with multiple optimized goals about higher capacity and lower packet loss probability. The simulation results showed that our mechanism could get a much better performance on packet loss with suboptimum system capacity. And it indicated that our mechanism could guarantee a better user experience through the specific QoS-QoE mapping models. So our mechanism can improve the network performance and user experience comprehensively.
Mobile quality of experience: Recent advances and challenges Quality of Experience (QoE) is important from both a user perspective, since it assesses the quality a user actually experiences, and a network perspective, since it is important for a provider to dimension its network to support the necessary QoE. This paper presents some recent advances on the modeling and measurement of QoE with an emphasis on mobile networks. It also identifies key challenges for mobile QoE.
Personalized user engagement modeling for mobile videos. The ever-increasing mobile video services and users’ demand for better video quality have boosted research into the video Quality-of-Experience. Recently, the concept of Quality-of-Experience has evolved to Quality-of-Engagement, a more actionable metric to evaluate users’ engagement to the video services and directly relate to the service providers’ revenue model. Existing works on user engagement mostly adopt uniform models to quantify the engagement level of all users, overlooking the essential distinction of individual users. In this paper, we first conduct a large-scale measurement study on a real-world data set to demonstrate the dramatic discrepancy in user engagement, which implies that a uniform model is not expressive enough to characterize the distinctive engagement pattern of each user. To address this problem, we propose PE, a personalized user engagement model for mobile videos, which, for the first time, addresses the user diversity in the engagement modeling. Evaluation results on a real-world data set show that our system significantly outperforms the uniform engagement models, with a 19.14% performance gain.
Linking users' subjective QoE evaluation to signal strength in an IEEE 802.11b/g wireless LAN environment Although the literature on Quality of Experience (QoE) has boomed over the last few years, only a limited number of studies have focused on the relation between objective technical parameters and subjective user-centric indicators of QoE. Building on an overview of the related literature, this paper introduces the use of a software monitoring tool as part of an interdisciplinary approach to QoE measurement. In the presented study, a panel of test users evaluated a mobile web-browsing application (i.e., Wapedia) on a PDA in an IEEE 802.11b/g Wireless LAN environment by rating a number of key QoE dimensions on the device immediately after usage. This subjective evaluation was linked to the signal strength, monitored during PDA usage at four different locations in the test environment. The aim of this study is to assess and model the relation between the subjective evaluation of QoE and the (objective) signal strength in order to achieve future QoE optimization.
Advanced downlink LTE radio resource management for HTTP-streaming Video traffic contributes to the majority of data packets transported over cellular wireless. Future broadband wireless access networks based on 3GPP's Long Term Evolution offer mechanisms for optimized transmission with high data rates and low delay. However, especially when packets are transmitted in the LTE downlink and if services are run over-the-top (OTT), optimization of radio resources in a multi-user environment for video services becomes infeasible. The current market trend is moving to OTT solutions, also for video transmission, where an emerging standard based on HTTP streaming - DASH - is expected to have a huge success in the upcoming years. The solution presented in this paper consists of a novel technique, which combines LTE features with knowledge on DASH sessions for optimization of the wireless resources. The combined optimization yields an improved transmission of videos over cellular wireless systems which are based on LTE and LTE-Advanced.
QoE-based Cross-Layer Optimization for video delivery in Long Term Evolution mobile networks.
Objective Video Quality Assessment Methods: A Classification, Review, and Performance Comparison With the increasing demand for video-based applications, the reliable prediction of video quality has increased in importance. Numerous video quality assessment methods and metrics have been proposed over the past years with varying computational complexity and accuracy. In this paper, we introduce a classification scheme for full-reference and reduced-reference media-layer objective video quality assessment methods. Our classification scheme first classifies a method according to whether natural visual characteristics or perceptual (human visual system) characteristics are considered. We further subclassify natural visual characteristics methods into methods based on natural visual statistics or natural visual features. We subclassify perceptual characteristics methods into frequency or pixel-domain methods. According to our classification scheme, we comprehensively review and compare the media-layer objective video quality models for both standard resolution and high definition video. We find that the natural visual statistics based MultiScale-Structural SIMilarity index (MS-SSIM), the natural visual feature based Video Quality Metric (VQM), and the perceptual spatio-temporal frequency-domain based MOtion-based Video Integrity Evaluation (MOVIE) index give the best performance for the LIVE Video Quality Database.
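As a minimal, self-contained illustration of the full-reference idea underlying metrics such as MS-SSIM and VQM (not an implementation of any of them), the sketch below computes frame-wise PSNR between a reference and a distorted video represented as NumPy arrays; the synthetic frames are placeholders.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak * peak / mse)

# Synthetic "video": 10 reference frames and a noisy distorted version of each.
rng = np.random.default_rng(0)
ref_video = rng.integers(0, 256, size=(10, 144, 176), dtype=np.uint8)
dist_video = np.clip(ref_video + rng.normal(0, 5, ref_video.shape), 0, 255).astype(np.uint8)

per_frame = [psnr(r, d) for r, d in zip(ref_video, dist_video)]
print("mean PSNR over the sequence: %.2f dB" % np.mean(per_frame))
```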
Augmented Vision And Quality Of Experience Assessment: Towards A Unified Evaluation Framework New display modalities in forthcoming media consumption scenarios require a realignment of currently employed Quality of Experience evaluation frameworks for these novel settings. We consider commercially available optical see-through devices, typically employed by operators in augmented vision or augmented reality scenarios. Based on current multimedia evaluation frameworks, we extrapolate onto the additional environmental challenges provided by the overlay of media content with real-world backgrounds. We derive an overall framework of configurations and metrics that should be part of subjective quality assessment studies and be incorporated into future databases to provide a high quality ground truth foundation for long-term applicability. We present an exemplary experimental setup of a pilot study with related components currently in use to perform human subject experiments for this domain.
Resilient Peer-to-Peer Streaming We consider the problem of distributing "live" streaming media content to a potentially large and highly dynamic population of hosts. Peer-to-peer content distribution is attractive in this setting because the bandwidth available to serve content scales with demand. A key challenge, however, is making content distribution robust to peer transience. Our approach to providing robustness is to introduce redundancy, both in network paths and in data. We use multiple, diverse distribution trees to provide redundancy in network paths and multiple description coding (MDC) to provide redundancy in data. We present a simple tree management algorithm that provides the necessary path diversity and describe an adaptation framework for MDC based on scalable receiver feedback. We evaluate these using MDC applied to real video data coupled with real usage traces from a major news site that experienced a large flash crowd for live streaming content. Our results show very significant benefits in using multiple distribution trees and MDC, with a 22 dB improvement in PSNR in some cases.
Extending the mathematics in qualitative process theory. Reasoning about physical systems requires the integration of a range of knowledge and reasoning techniques. P. Hayes has named the enterprise of identifying and formalizing the common-sense knowledge people use for this task “naive physics.” Qualitative Process theory by K. Forbus proposes a structure and some of the content of naive theories about dynamics (i.e., the way things change in a physical situation). Any physical theory, however, rests on an underlying mathematics. QP theory assumes a qualitative mathematics which captures only simple topological relationships between values of continuous parameters. While the results are impressive, this mathematics is unable to support the full range of deduction needed for a complete naive physics reasoner. A more complete naive mathematics must be capable of representing measure information about parameter values as well as shape and strength characterizations of the partial derivatives relating these values. This article proposes a naive mathematics meeting these requirements, and shows that it considerably expands the scope and power of deductions which QP theory can perform.
Correlation-aware statistical timing analysis with non-Gaussian delay distributions Process variations have a growing impact on circuit performance for today's integrated circuit (IC) technologies. The non-Gaussian delay distributions as well as the correlations among delays make statistical timing analysis more challenging than ever. In this paper, the authors presented an efficient block-based statistical timing analysis approach with linear complexity with respect to the circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models. This approach accounts for all correlations, from manufacturing process dependence, to re-convergent circuit paths to produce more accurate statistical timing predictions. With this approach, circuit designers can have increased confidence in the variation estimates, at a low additional computation cost.
Efficient face candidates selector for face detection In this paper an efficient face candidates selector is proposed for face detection tasks in still gray level images. The proposed method acts as a selective attentional mechanism. Eye-analogue segments at a given scale are discovered by finding regions which are roughly as large as real eyes and are darker than their neighborhoods. Then a pair of eye-analogue segments are hypothesized to be eyes in a face and combined into a face candidate if their placement is consistent with the anthropological characteristic of human eyes. The proposed method is robust in that it can deal with illumination changes and moderate rotations. A subset of the FERET data set and the BioID face database are used to evaluate the proposed method. The proposed face candidates selector is successful in 98.75% and 98.6% cases, respectively.
A 3DTV Broadcasting Scheme for High-Quality Stereoscopic Content Over a Hybrid Network Various methods are used to provide three-dimensional television (3DTV) services over a terrestrial broadcasting network. However, these services cannot provide high-quality 3D content to consumers, because current terrestrial broadcast networks are allocated limited bandwidth for transmitting 3D content, which requires larger bandwidth than 2D content. To overcome this limitation, this paper proposes a hybrid 3DTV broadcasting system, which utilizes both a terrestrial broadcast network and a broadband network. In the proposed system, two elementary streams of left and right views for a stereoscopic video service are transmitted over a terrestrial broadcasting network and a broadband network, respectively. In addition, the proposed system suggests a new mechanism for synchronization between these two elementary streams. The proposed scheme can provide high-quality 3DTV service regardless of the bandwidth of the terrestrial broadcast network while maintaining backward compatibility with a 2D DTV broadcasting service.
1.013488
0.013668
0.013668
0.013668
0.011372
0.006904
0.005332
0.001566
0.000218
0.000014
0
0
0
0
Predicting circuit performance using circuit-level statistical timing analysis Recognizing that the delay of a circuit is extremely sensitive to manufacturing process variations, this paper proposes a methodology for statistical timing analysis. The authors present a triple-node delay model which inherently captures the effect of input transition time on the gate delays. Response surface methods are used so that the statistical gate delays are generated efficiently. A new path sensitization criterion based on the minimum propagatable pulse width (MPPW) of the gates along a path is used to check for false paths. The overlap of a path with longer paths determines its “statistical significance” to the overall circuit delay. Finally, the circuit delay probability density function is computed by performing a Monte Carlo simulation on the statistically significant path set
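To make the Monte Carlo flavor of these statistical timing approaches concrete, here is a small sketch (not the triple-node delay model or path sensitization criterion of the paper above) that estimates the delay distribution of a single path whose gate delays are correlated Gaussians; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Nominal gate delays (ps) along one path and a simple correlation model:
# each gate delay = nominal * (1 + global_var + local_var).
nominal = np.array([35.0, 22.0, 48.0, 30.0, 41.0])
sigma_global, sigma_local = 0.05, 0.03   # inter-die and intra-die variation (fractions)

n_samples = 100_000
global_var = rng.normal(0.0, sigma_global, size=(n_samples, 1))             # shared by all gates
local_var  = rng.normal(0.0, sigma_local,  size=(n_samples, nominal.size))  # per-gate

path_delay = np.sum(nominal * (1.0 + global_var + local_var), axis=1)

print("mean path delay : %.2f ps" % path_delay.mean())
print("std dev         : %.2f ps" % path_delay.std())
print("99.7%% quantile  : %.2f ps" % np.quantile(path_delay, 0.997))
```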
On path-based learning and its applications in delay test and diagnosis This paper describes the implementation of a novel path-based learning methodology that can be applied for two purposes: (1) In a pre-silicon simulation environment, path-based learning can be used to produce a fast and approximate simulator for statistical timing simulation. (2) In post-silicon phase, path-based learning can be used as a vehicle to derive critical paths based on the pass/fail behavior observed from the test chips. Our path-based learning methodology consists of four major components: a delay test pattern set, a logic simulator, a set of selected paths as the basis for learning, and a machine learner. We explain the key concepts in this methodology and present experimental results to demonstrate its feasibility and applications.
VGTA: Variation Aware Gate Timing Analysis As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to gate and wire variability. Therefore, statistical timing analysis is inevitable. Most timing tools divide the analysis into two parts: 1) interconnect (wire) timing analysis and 2) gate timing analysis. Variational interconnect delay calculation for block-based TA has been recently studied. However, variational gate delay calculation has remained unexplored. In this paper, we propose a new framework to handle the variation-aware gate timing analysis in block-based TA. First, we present an approach to approximate the variational RC-π load by using a canonical first-order model. Next, an efficient variation-aware effective capacitance calculation based on statistical input transition, statistical gate timing library, and statistical RC-π load is presented. In this step, we use a single-iteration Ceff calculation which is efficient and reasonably accurate. Finally we calculate the statistical gate delay and output slew based on the aforementioned model. Experimental results show an average error of 7% for gate delay and output slew with respect to the HSPICE Monte Carlo simulation while the runtime is about 145 times faster.
A probabilistic analysis of pipelined global interconnect under process variations The main thesis of this paper is to perform a reliability based performance analysis for a shared latch inserted global interconnect under uncertainty. We first put forward a novel delay metric named DMA for estimation of interconnect delay probability density function considering process variations. Without considerable loss in accuracy, DMA can achieve high computational efficiency even in a large space of random variables. We then propose a comprehensive probabilistic methodology for sampling transfers, on a shared latch inserted global interconnect, that highly improves the reliability of the interconnect. Improvements up to 125% are observed in the reliability when compared to deterministic sampling approach. It is also shown that dual phase clocking scheme for pipelined global interconnect is able to meet more stringent timing constraints due to its lower latency.
Multipoint moment matching model for multiport distributed interconnect networks We provide a multipoint moment matching model for multiport distributed interconnect networks. We introduce a new concept: integrated congruence transform which can be applied to the partial differential equations of a distributed line and generate a passive finite order system as its model. Moreover, we also provide an efficient algorithm based on the L^2 Hilbert space theory so that exact moment matching at multiple points can be obtained.
PRIMO: probability interpretation of moments for delay calculation Moments of the impulse response are widely used for interconnect delay analysis, from the explicit Elmore delay (first moment of the impulse response) expression, to moment matching methods which create reduced order transimpedance and transfer function approximations. However, the Elmore delay is fast becoming ineffective for deep submicron technologies, and reduced order transfer function delays are impractical for use as early-phase design metrics or as design optimization cost functions. This paper describes an approach for fitting moments of the impulse response to probability density functions so that delays can be estimated from probability tables. For RC trees it is demonstrated that the incomplete gamma function provides a provably stable approximation. The step response delay is obtained from a one-dimensional table lookup.
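The abstract above is about estimating delay from the low-order moments of the impulse response. As a hedged illustration of that general idea (not PRIMO's table-lookup procedure), the sketch below computes the first two moments of an RC ladder via the standard Elmore-style recursions and moment-matches a gamma distribution to read off a median delay; the RC values are arbitrary.

```python
import numpy as np
from scipy.stats import gamma

# Arbitrary RC ladder: R[i] connects node i to node i-1, C[i] loads node i (node 0 is the source).
R = np.array([50.0, 40.0, 30.0, 20.0])          # ohms
C = np.array([20e-15, 15e-15, 15e-15, 10e-15])  # farads

Rpath = np.cumsum(R)                         # resistance from the source up to each node
Rshared = np.minimum.outer(Rpath, Rpath)     # R_ik = resistance shared by the paths to nodes i and k

m1 = Rshared @ C                             # first moments (Elmore delays) at every node
m2 = Rshared @ (C * m1)                      # second transfer-function moments

mean = m1[-1]                                # impulse-response mean at the far end
var = 2.0 * m2[-1] - m1[-1] ** 2             # impulse-response variance at the far end

# Moment-match a gamma distribution and read the 50% delay from its quantile function.
shape = mean ** 2 / var
scale = var / mean
print("Elmore delay (mean) : %.2f ps" % (mean * 1e12))
print("gamma-fit 50%% delay : %.2f ps" % (gamma.ppf(0.5, shape, scale=scale) * 1e12))
```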
Spectral Polynomial Chaos Solutions of the Stochastic Advection Equation We present a new algorithm based on Wiener–Hermite functionals combined with Fourier collocation to solve the advection equation with stochastic transport velocity. We develop different strategies of representing the stochastic input, and demonstrate that this approach is orders of magnitude more efficient than Monte Carlo simulations for comparable accuracy.
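As a toy illustration of the Wiener-Hermite expansion idea (not the Fourier-collocation solver of the paper above), the sketch below projects a nonlinear function of a standard Gaussian input onto probabilists' Hermite polynomials using Gauss-Hermite quadrature from NumPy, then recovers the mean and variance from the coefficients; the example input is made up.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def pce_coefficients(f, order, quad_points=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
    x, w = He.hermegauss(quad_points)        # nodes/weights for the weight exp(-x^2/2)
    w = w / sqrt(2.0 * pi)                   # normalize so the weights integrate the N(0,1) density
    coeffs = []
    for k in range(order + 1):
        Hk = He.hermeval(x, [0] * k + [1])   # He_k evaluated at the quadrature nodes
        coeffs.append(np.sum(w * f(x) * Hk) / factorial(k))  # a_k = E[f He_k] / k!
    return np.array(coeffs)

# Example "stochastic transport velocity" style input: u = exp(0.3 * xi).
a = pce_coefficients(lambda xi: np.exp(0.3 * xi), order=6)

mean_pce = a[0]
var_pce = sum(factorial(k) * a[k] ** 2 for k in range(1, len(a)))

print("PCE mean %.6f  (exact %.6f)" % (mean_pce, np.exp(0.045)))
print("PCE var  %.6f  (exact %.6f)" % (var_pce, np.exp(0.09) * (np.exp(0.09) - 1.0)))
```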
Design time body bias selection for parametric yield improvement Circuits designed in aggressively scaled technologies face both stringent power constraints and increased process variability. Achieving high parametric yield is a key design objective, but is complicated by the correlation between power and performance. This paper proposes a novel design time body bias selection framework for parametric yield optimization while reducing testing costs. The framework considers both inter- and intra-die variations as well as power-performance correlations. This approach uses a feature extraction technique to explore the underlying similarity between the gates for effective clustering. Once the gates are clustered, a Gaussian quadrature based model is applied for fast yield analysis and optimization. This work also introduces an incremental method for statistical power computation to further reduce the optimization complexity. The proposed framework improves parametric yield from 39% to 80% on average for 11 benchmark circuits while runtime is linear with circuit size and on the order of minutes for designs with up to 15 K gates.
Impact of interconnect variations on the clock skew of a gigahertz microprocessor Due to the large die sizes and tight relative clock skew margins, the impact of interconnect manufacturing variations on the clock skew in today's gigahertz microprocessors can no longer be ignored. Unlike manufacturing variations in the devices, the impact of the interconnect manufacturing variations on IC timing performance cannot be captured by worst/best case corner point methods. Thus it is difficult to estimate the clock skew variability due to interconnect variations. In this paper we analyze the timing impact of several key statistically independent interconnect variations in a context-dependent manner by applying a previously reported interconnect variational order-reduction technique. The results show that the interconnect variations can cause up to 25% clock skew variability in a modern microprocessor design.
Wavelet balance approach for steady-state analysis in nonlinear circuits In this paper, a novel wavelet-balance method is proposed for steady-state analysis of nonlinear circuits. Taking advantage of the superior computational properties of wavelets, the proposed method presents several merits compared with conventional frequency-domain techniques. First, it has a high convergence rate with respect to the step length. Second, it works in the time domain so that many critical problems in the frequency domain, such as nonlinearity and high order harmonics, can be handled efficiently. Third, an adaptive scheme exists to automatically select the proper wavelet basis functions needed at a given accuracy. Numerical experiments further prove the promising features of the proposed method in solving steady-state problems. I. INTRODUCTION A major difficulty in time-domain simulation of nonlinear circuits, such as power supplies, high-Q amplifiers, modulators and oscillators, etc., is that the transient response may last for quite a long time before the steady state is reached. This problem makes it infeasible to calculate the steady-state response by conventional transient simulation algorithms because direct integration of the circuit equations throughout the transients consumes unbearable computing time. During the past several decades, a great number of techniques have been developed to solve the periodic steady-state problem (1)-(11), which may be categorized into three classes: shooting methods (1)-(4), harmonic balance methods (5)-(10) and sample balance methods (11). The shooting methods attempt to find a set of initial conditions satisfying the two-point boundary constraint, such that the circuit starts in periodic steady state directly. However, the shooting methods consume expensive computing time since they need to numerically integrate the system equations time after time. The harmonic balance methods assume the circuit solutions in the form of Fourier series. Moreover, they divide the circuit into a linear and a nonlinear part so that the linear subnetwork can be solved efficiently in the frequency domain. Unfortunately, the harmonic balance methods need to repeatedly execute DFT and IDFT operations during the solution process, and employ a large number of harmonic components to achieve an accurate simulation result. Therefore, they also expend substantial computing time. The sample balance methods directly approximate the time-domain
A stochastic variational multiscale method for diffusion in heterogeneous random media A stochastic variational multiscale method with explicit subgrid modelling is provided for numerical solution of stochastic elliptic equations that arise while modelling diffusion in heterogeneous random media. The exact solution of the governing equations is split into two components: a coarse-scale solution that can be captured on a coarse mesh and a subgrid solution. A localized computational model for the subgrid solution is derived for a generalized trapezoidal time integration rule for the coarse-scale solution. The coarse-scale solution is then obtained by solving a modified coarse formulation that takes into account the subgrid model. The generalized polynomial chaos method combined with the finite element technique is used for the solution of equations resulting from the coarse formulation and subgrid models. Finally, various numerical examples are considered for evaluating the method.
A fast approach for overcomplete sparse decomposition based on smoothed l0 norm In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include under-determined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the l1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the l0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
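Below is a compact sketch of the smoothed-l0 idea described above: a graduated sequence of Gaussian-smoothed l0 surrogates, gradient steps, and projection back onto the constraint set. The step size, sigma schedule, and problem sizes are illustrative choices, not necessarily the authors' exact settings.

```python
import numpy as np

def sl0(A, b, sigma_decrease=0.5, sigma_min=1e-4, mu=2.0, inner_iters=3):
    """Approximate the sparsest x with A x = b by minimizing a smoothed l0 surrogate."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                          # start from the minimum-l2-norm solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            grad = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))   # gradient of the smoothed measure
            x = x - mu * grad                                  # descent step on the surrogate
            x = x - A_pinv @ (A @ x - b)                       # project back onto {x : A x = b}
        sigma *= sigma_decrease
    return x

# Tiny demo: recover a 4-sparse vector from 30 random measurements of a length-100 signal.
rng = np.random.default_rng(1)
n, m, k = 100, 30, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1, (m, n))
b = A @ x_true

x_hat = sl0(A, b)
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```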
NANOLAB: A Tool for Evaluating Reliability of Defect-Tolerant Nano Architectures As silicon manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties stem from the minuscule dimensions of the devices, quantum physical effects, reduced noise margins, system energy levels approaching the thermal limits of computing, manufacturing defects, aging and many other factors. Defect tolerant architectures and their reliability measures will gain importance for logic and micro-architecture designs based on nano-scale substrates. Recently, Markov Random Field (MRF) has been proposed as a model of computation for nanoscale logic gates. In this paper, we take this approach further by automating this computational scheme together with a Belief Propagation algorithm. We have developed MATLAB based libraries and a toolset for fundamental logic gates that can compute output probability distributions and entropies for specified input distributions. Our tool eases evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated in this paper by automatically deriving various reliability results for defect-tolerant architectures, such as Triple Modular Redundancy (TMR), Cascaded Triple Modular Redundancy (CTMR) and multi-stage iterations of these. These results are used to analyze trade-offs between reliability and redundancy for these architectural configurations.
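The reliability numbers that such a tool derives for redundant architectures can be illustrated with elementary probability. The snippet below is a deliberately simplified stand-in for the MATLAB/Belief-Propagation machinery described above: it compares a single module, TMR with an assumed-perfect majority voter, and a second-order cascade built from TMR blocks; the module reliability is a made-up input.

```python
def tmr_reliability(r_module, r_voter=1.0):
    """Probability that a triple-modular-redundant block with majority voting is correct."""
    majority_ok = r_module ** 3 + 3 * r_module ** 2 * (1 - r_module)  # at least 2 of 3 correct
    return r_voter * majority_ok

r = 0.95                      # assumed reliability of one unhardened module
tmr = tmr_reliability(r)
ctmr = tmr_reliability(tmr)   # cascade: treat each TMR block as a module of a second TMR stage

print("single module reliability : %.6f" % r)
print("TMR reliability           : %.6f" % tmr)
print("cascaded TMR reliability  : %.6f" % ctmr)
```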
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
1.100553
0.100668
0.100668
0.100668
0.100438
0.014529
0.005916
0.000455
0.000141
0.000048
0
0
0
0
The Roles of Fuzzy Logic and Soft Computing in the Conception, Design and Deployment of Intelligent Systems The essence of soft computing is that, unlike the traditional, hard computing, it is aimed at an accommodation with the pervasive imprecision of the real world. Thus, the guiding principle of soft computing is: ‘...exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality’. In the final analysis, the role model for soft computing is the human mind.
Adaptive-expectation based multi-attribute FTS model for forecasting TAIEX In recent years, there have been many time series methods proposed for forecasting enrollments, weather, the economy, population growth, and stock price, etc. However, traditional time series models, such as ARIMA, which are expressed by mathematical equations, are not easily understood by stock investors. Besides, fuzzy time series can produce fuzzy rules based on linguistic values, which are more reasonable than mathematical equations for investors. Furthermore, from the literature reviews, two shortcomings are found in fuzzy time series methods: (1) they lack persuasiveness in determining the universe of discourse and the linguistic length of intervals, and (2) only one attribute (closing price) is usually considered in forecasting, not multiple attributes (such as closing price, open price, high price, and low price). Therefore, this paper proposes a multiple attribute fuzzy time series (FTS) method, which incorporates a clustering method and adaptive expectation model, to overcome the shortcomings above. In verification, using actual trading data of the Taiwan Stock Index (TAIEX) as experimental datasets, we evaluate the accuracy of the proposed method and compare the performance with the (Chen, 1996 [7], Yu, 2005 [6], and Cheng, Cheng, & Wang, 2008 [20]) methods. The proposed method is superior to the listed methods based on average error percentage (MAER).
A new hybrid approach based on SARIMA and partial high order bivariate fuzzy time series forecasting model In the literature, there have been many studies using fuzzy time series for the purpose of forecasting. The most studied model is the first order fuzzy time series model. In this model, an observation of fuzzy time series is obtained by using the previous observation. In other words, only the first lagged variable is used when constructing the first order fuzzy time series model. Therefore, this model may not be sufficient for some time series, such as seasonal time series, which are an important class of time series models. Besides, the time series encountered in real life have not only an autoregressive (AR) structure but also a moving average (MA) structure. The fuzzy time series models available in the literature are AR structured and are not appropriate for MA structured time series. In this paper, a hybrid approach is proposed in order to analyze seasonal fuzzy time series. The proposed hybrid approach is based on the partial high order bivariate fuzzy time series forecasting model which is first introduced in this paper. The order of this model is determined by utilizing the Box-Jenkins method. In order to show the efficiency of the proposed hybrid method, real time series are analyzed with this method. The results obtained from the proposed method are compared with those of other methods. As a result, it is observed that more accurate results are obtained from the proposed hybrid method.
Forecasting innovation diffusion of products using trend-weighted fuzzy time-series model The time-series models have been used to make reasonably accurate predictions in weather forecasting, academic enrolment, stock price, etc. This study proposes a novel method that incorporates trend-weighting into the fuzzy time-series models advanced by Chen's and Yu's method to explore the extent to which the innovation diffusion of ICT products could be adequately described by the proposed procedure. To verify the proposed procedure, the actual DSL (digital subscriber line) data in Taiwan is illustrated, and this study evaluates the accuracy of the proposed procedure by comparing with different innovation diffusion models: Bass model, Logistic model and Dynamic model. The results show that the proposed procedure surpasses the methods listed in terms of accuracy and SSE (Sum of Squares Error).
A hybrid multi-order fuzzy time series for forecasting stock markets This paper proposes a hybrid model based on multi-order fuzzy time series, which employs rough sets theory to mine fuzzy logical relationship from time series and an adaptive expectation model to adjust forecasting results, to improve forecasting accuracy. Two empirical stock markets (TAIEX and NASDAQ) are used as empirical databases to verify the forecasting performance of the proposed model, and two other methodologies, proposed earlier by Chen and Yu, are employed as comparison models. Besides, to compare with conventional statistic method, the partial autocorrelation function and autoregressive models are utilized to estimate the time lags periods within the databases. Based on comparison results, the proposed model can effectively improve the forecasting performance and outperforms the listing models. From the empirical study, the conventional statistic method and the proposed model both have revealed that the estimated time lags for the two empirical databases are one lagged period.
Fuzzy time-series based on adaptive expectation model for TAIEX forecasting Time-series models have been used to make predictions in the areas of stock price forecasting, academic enrollment and weather, etc. However, in stock markets, reasonable investors will modify their forecasts based on recent forecasting errors. Therefore, we propose a new fuzzy time-series model which incorporates the adaptive expectation model into forecasting processes to modify forecasting errors. Using actual trading data from the Taiwan Stock Index (TAIEX), we evaluate the accuracy of the proposed model by comparing our forecasts with those derived from Chen's [Chen, S. M. (1996). Forecasting enrollments based on fuzzy time-series, Fuzzy Sets and Systems, 81, 311-319] and Yu's [Yu, Hui-Kuang. (2004). Weighted fuzzy time-series models for TAIEX forecasting. Physica A, 349, 609-624] models. The comparison results indicate that our model surpasses in accuracy those suggested by Chen and Yu.
Fuzzy stochastic fuzzy time series and its models In this paper, as an extension of the concept of time series, we will present the definition and models of fuzzy stochastic fuzzy time series (FSFTS), both of whose values and the probabilities with which the FSFTS assumes its values are fuzzy sets, and which may not be modeled properly by the concept of time series. To investigate FSFTS, the definition of fuzzy valued probability distributions is considered and discussed. When the FSFTS is time-invariant, several preliminary conclusions are derived.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Duality Theory in Fuzzy Linear Programming Problems with Fuzzy Coefficients The concept of fuzzy scalar (inner) product that will be used in the fuzzy objective and inequality constraints of the fuzzy primal and dual linear programming problems with fuzzy coefficients is proposed in this paper. We also introduce a solution concept that is essentially similar to the notion of Pareto optimal solution in the multiobjective programming problems by imposing a partial ordering on the set of all fuzzy numbers. We then prove the weak and strong duality theorems for fuzzy linear programming problems with fuzzy coefficients.
Randomized rounding: a technique for provably good algorithms and algorithmic proofs We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be a of extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.
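A minimal sketch of the randomized-rounding recipe described above, applied to a toy set-cover instance: solve the LP relaxation, then include each set independently with probability equal to its fractional value, repeating a few rounds to boost coverage. The instance, repetition count, and absence of a repair step are all illustrative simplifications.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

# Toy set-cover instance: universe of 8 elements, 6 candidate sets (rows of the incidence matrix).
incidence = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 1],
])
n_sets, n_elems = incidence.shape

# LP relaxation: minimize sum x_s subject to every element being covered at least once, 0 <= x <= 1.
res = linprog(c=np.ones(n_sets), A_ub=-incidence.T, b_ub=-np.ones(n_elems), bounds=[(0, 1)] * n_sets)
x_frac = res.x

# Randomized rounding: O(log n) independent rounds, keeping every set picked in any round.
chosen = np.zeros(n_sets, dtype=bool)
for _ in range(int(np.ceil(np.log(n_elems))) + 1):
    chosen |= rng.random(n_sets) < x_frac

covered = incidence[chosen].sum(axis=0) > 0
print("fractional optimum :", x_frac.round(3), " cost %.2f" % x_frac.sum())
print("rounded cover      :", np.flatnonzero(chosen), " cost", int(chosen.sum()), " covers all:", covered.all())
```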
Compound Linguistic Scale. • Compound Linguistic Scale comprises Compound Linguistic Variable, Fuzzy Normal Distribution and Deductive Rating Strategy. • CLV can produce two dimensional options, i.e. compound linguistic terms, to better reflect the raters’ preferences. • DRS is a double step rating approach for a rater to choose a compound linguistic term among two dimensional options. • FND can efficiently produce a population of fuzzy numbers for a linguistic term set using only a few parameters. • CLS, as a rating interface, can contribute to various application domains in engineering and the social sciences.
Looking for a good fuzzy system interpretability index: An experimental approach Interpretability is acknowledged as the main advantage of fuzzy systems and it should be given a main role in fuzzy modeling. Classical systems are viewed as black boxes because mathematical formulas set the mapping between inputs and outputs. On the contrary, fuzzy systems (if they are built regarding some constraints) can be seen as gray boxes in the sense that every element of the whole system can be checked and understood by a human being. Interpretability is essential for those applications with high human interaction, for instance decision support systems in fields like medicine, economics, etc. Since interpretability is not guaranteed by definition, a huge effort has been done to find out the basic constraints to be superimposed during the fuzzy modeling process. People talk a lot about interpretability but the real meaning is not clear. Understanding of fuzzy systems is a subjective task which strongly depends on the background (experience, preferences, and knowledge) of the person who makes the assessment. As a consequence, although there have been a few attempts to define interpretability indices, there is still not a universal index widely accepted. As part of this work, with the aim of evaluating the most used indices, an experimental analysis (in the form of a web poll) was carried out yielding some useful clues to keep in mind regarding interpretability assessment. Results extracted from the poll show the inherent subjectivity of the measure because we collected a huge diversity of answers completely different at first glance. However, it was possible to find out some interesting user profiles after comparing carefully all the answers. It can be concluded that defining a numerical index is not enough to get a widely accepted index. Moreover, it is necessary to define a fuzzy index easily adaptable to the context of each problem as well as to the user quality criteria.
Interactive group decision-making using a fuzzy linguistic approach for evaluating the flexibility in a supply chain ► This study builds a group decision-making structure model of flexibility in supply chain management development. ► This study presents a framework for evaluating supply chain flexibility. ► This study proposes an algorithm for determining the degree of supply chain flexibility using a new fuzzy linguistic approach. ► This fuzzy linguistic approach has the advantage of preserving information without loss.
The laws of large numbers for fuzzy random variables A new attempt at the weak and strong laws of large numbers for fuzzy random variables is discussed in this paper by proposing the convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend them to the convergence in probability and convergence with probability one for fuzzy random variables. We provide the notion of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally we come up with the weak and strong laws of large numbers for fuzzy random variables in the weak and strong sense.
1.22
0.22
0.22
0.22
0.076667
0.05
0.001875
0
0
0
0
0
0
0
Perceptual reasoning for perceptual computing: a similarity-based approach Perceptual reasoning (PR) is an approximate reasoning method that can be used as a computing-with-words (CWW) engine in perceptual computing. There can be different approaches to implement PR, e.g., firing-interval-based PR (FI-PR), which has been proposed in J. M. Mendel and D. Wu, IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1550-1564, Dec. 2008 and similarity-based PR (SPR), which is proposed in this paper. Both approaches satisfy the requirement on a CWW engine that the result of combining fired rules should lead to a footprint of uncertainty (FOU) that resembles the three kinds of FOUs in a CWW codebook. A comparative study shows that S-PR leads to output FOUs that resemble word FOUs, which are obtained from subject data, much more closely than FI-PR; hence, S-PR is a better choice for a CWW engine than FI-PR.
Extension Principle of Interval-Valued Fuzzy Set In this paper, we introduce the maximal and minimal extension principles of interval-valued fuzzy sets and an axiomatic definition of the generalized extension principle of interval-valued fuzzy sets, and we use the concepts of the cut set of an interval-valued fuzzy set and interval-valued nested sets to explain their construction procedure in detail. These conclusions can be applied in some fields such as fuzzy algebra, fuzzy analysis and so on.
Type-2 Fuzzy Arithmetic Using Alpha-Planes This paper examines type-2 fuzzy arithmetic using interval analysis. It relies heavily on alpha-cuts and alpha-planes. Furthermore, we discuss the use of quasi type-2 fuzzy sets proposed by Mendel and Liu and define quasi type-2 fuzzy numbers. Arithmetic operations of such numbers are defined and a worked example is presented.
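The alpha-cut/alpha-plane machinery referred to above reduces, for ordinary (type-1) fuzzy numbers, to interval arithmetic applied cut by cut. As a small grounding example (type-1 only, not the quasi type-2 construction of the paper), the sketch below adds and multiplies two triangular fuzzy numbers through their alpha-cuts; the numbers themselves are arbitrary.

```python
import numpy as np

def alpha_cut_triangular(a, b, c, alpha):
    """Closed interval [lo, hi] of the triangular fuzzy number (a, b, c) at level alpha."""
    return a + alpha * (b - a), c - alpha * (c - b)

alphas = np.linspace(0.0, 1.0, 5)
A = (1.0, 2.0, 3.0)     # "about 2"
B = (2.0, 4.0, 5.0)     # "about 4"

print("alpha   A+B (interval)      A*B (interval)")
for alpha in alphas:
    a_lo, a_hi = alpha_cut_triangular(*A, alpha)
    b_lo, b_hi = alpha_cut_triangular(*B, alpha)
    sum_cut = (a_lo + b_lo, a_hi + b_hi)                      # interval addition
    products = [a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi]
    prod_cut = (min(products), max(products))                 # interval multiplication
    print("%.2f    [%.2f, %.2f]       [%.2f, %.2f]" % (alpha, *sum_cut, *prod_cut))
```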
Type-2 Fuzzy Sets as Functions on Spaces For many readers and potential authors, type-2 (T2) fuzzy sets might be more readily understood if expressed by the use of standard mathematical notation and terminology. This paper, therefore, translates constructs associated with T2 fuzzy sets to the language of functions on spaces. Such translations may encourage researchers in different disciplines to investigate T2 fuzzy sets, thereby potentially broadening their application and strengthening the underlying theory.
The role of fuzzy sets in decision sciences: Old techniques and new directions We try to provide a tentative assessment of the role of fuzzy sets in decision analysis. We discuss membership functions, aggregation operations, linguistic variables, fuzzy intervals and the valued preference relations they induce. The importance of the notion of bipolarity and the potential of qualitative evaluation methods are also pointed out. We take a critical standpoint on the state-of-the-art, in order to highlight the actual achievements and question what is often considered debatable by decision scientists observing the fuzzy decision analysis literature.
Application of fuzzy logic to reliability engineering The analysis of system reliability often requires the use of subjective-judgments, uncertain data, and approximate system models. By allowing imprecision and approximate analysis fuzzy logic provides an effective tool for characterizing system reliability in these circumstances; it does not force precision where it is not possible. Here we apply the main concepts of fuzzy logic, fuzzy arithmetic and linguistic variables to the analysis of system structures, fault trees, event trees, the reliability of degradable systems, and the assessment of system criticality based on the severity of a failure and its probability of occurrence
Designing Type-1 and Type-2 Fuzzy Logic Controllers via Fuzzy Lyapunov Synthesis for nonsmooth mechanical systems In this paper, Fuzzy Lyapunov Synthesis is extended to the design of Type-1 and Type-2 Fuzzy Logic Controllers for nonsmooth mechanical systems. The output regulation problem for a servomechanism with nonlinear backlash is proposed as a case of study. The problem at hand is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of the nonminimum phase properties of the system. Performance issues of the Type-1 and Type-2 Fuzzy Logic Regulators that were designed are illustrated in experimental studies.
Multivariate modeling and type-2 fuzzy sets This paper explores the link between type-2 fuzzy sets and multivariate modeling. Elements of a space X are treated as observations fuzzily associated with values in a multivariate feature space. A category or class is likewise treated as a fuzzy allocation of feature values (possibly dependent on values in X). We observe that a type-2 fuzzy set on X generated by these two fuzzy allocations captures imprecision in the class definition and imprecision in the observations. In practice many type-2 fuzzy sets are in fact generated in this way and can therefore be interpreted as the output of a classification task. We then show that an arbitrary type-2 fuzzy set can be so constructed, by taking as a feature space a set of membership functions on X. This construction presents a new perspective on the Representation Theorem of Mendel and John. The multivariate modeling underpinning the type-2 fuzzy sets can also constrain realizable forms of membership functions. Because averaging operators such as centroid and subsethood on type-2 fuzzy sets involve a search for optima over membership functions, constraining this search can make computation easier and tighten the results. We demonstrate how the construction can be used to combine representations of concepts and how it therefore provides an additional tool, alongside standard operations such as intersection and subsethood, for concept fusion and computing with words.
A new evaluation model for intellectual capital based on computing with linguistic variable In a knowledge era, intellectual capital has become a determinant resource for an enterprise to retain and improve competitive advantage. Because the nature of intellectual capital is abstract, intangible, and difficult to measure, it becomes a challenge for business managers to evaluate intellectual capital performance effectively. Recently, several methods have been proposed to assist business managers in evaluating performance of intellectual capital. However, they also face information loss problems during the integration of subjective evaluations. Therefore, this paper proposes a suitable model for intellectual capital performance evaluation by combining the 2-tuple fuzzy linguistic approach with a multiple criteria decision-making (MCDM) method. It can handle the processes of evaluation integration and avoid information loss effectively. Based on the proposed model, its feasibility is demonstrated by the result of intellectual capital performance evaluation for a high-technology company in Taiwan.
An overview on the 2-tuple linguistic model for computing with words in decision making: Extensions, applications and challenges Many real world problems need to deal with uncertainty, therefore the management of such uncertainty is usually a big challenge. Hence, different proposals to tackle and manage the uncertainty have been developed. Probabilistic models are quite common, but when the uncertainty is not probabilistic in nature other models have arisen such as fuzzy logic and the fuzzy linguistic approach. The use of linguistic information to model and manage uncertainty has given good results and implies the accomplishment of processes of computing with words. A bird's eye view in the recent specialized literature about linguistic decision making, computing with words, linguistic computing models and their applications shows that the 2-tuple linguistic representation model [44] has been widely-used in the topic during the last decade. This use is because of reasons such as, its accuracy, its usefulness for improving linguistic solving processes in different applications, its interpretability, its ease managing of complex frameworks in which linguistic information is included and so forth. Therefore, after a decade of extensive and intensive successful use of this model in computing with words for different fields, it is the right moment to overview the model, its extensions, specific methodologies, applications and discuss challenges in the topic.
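The core of the 2-tuple model surveyed above is the pair of translation functions between a numerical value beta in [0, g] and a pair (s_i, alpha) made of a linguistic term and a symbolic translation alpha in [-0.5, 0.5). The sketch below implements those two translations and a 2-tuple arithmetic mean for a hypothetical five-term scale; the term names and the example assessments are invented.

```python
TERMS = ["none", "low", "medium", "high", "perfect"]   # hypothetical linguistic term set s_0..s_4

def delta(beta):
    """Translate beta in [0, g] into a 2-tuple (s_i, alpha) with alpha in [-0.5, 0.5)."""
    i = int(round(beta))
    return TERMS[i], beta - i

def delta_inv(term, alpha):
    """Translate a 2-tuple back into its numerical value beta."""
    return TERMS.index(term) + alpha

def two_tuple_mean(tuples):
    """Aggregate 2-tuples by averaging their numerical equivalents, then translating back."""
    beta = sum(delta_inv(t, a) for t, a in tuples) / len(tuples)
    return delta(beta)

assessments = [("high", 0.0), ("medium", 0.2), ("perfect", -0.4)]
print(two_tuple_mean(assessments))   # -> approximately ('high', -0.067) for these inputs
```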
Preservation Of Properties Of Interval-Valued Fuzzy Relations The goal of this paper is to consider properties of the composition of interval-valued fuzzy relations, which were introduced by L.A. Zadeh in 1975. Fuzzy set theory turned out to be a useful tool to describe situations in which the data are imprecise or vague. Interval-valued fuzzy set theory is a generalization of fuzzy set theory, which was also introduced by Zadeh, in 1965. This paper generalizes some properties of interval matrices considered by Pekala (2007) to those of interval-valued fuzzy relations.
SART-type image reconstruction from a limited number of projections with the sparsity constraint. Based on the recent mathematical findings on solving linear inverse problems with sparsity constraints by Daubechies et al., here we adapt a simultaneous algebraic reconstruction technique (SART) for image reconstruction from a limited number of projections subject to a sparsity constraint in terms of an invertible compression transform. The algorithm is implemented with an exemplary Haar wavelet transform and tested with a modified Shepp-Logan phantom. Our preliminary results demonstrate that the sparsity constraint helps effectively improve the quality of reconstructed images and reduce the number of necessary projections.
Compressive Acquisition of Dynamic Scenes Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to considerably lower the compressive measurement rate. We validate our approach with a range of experiments including classification experiments that highlight the effectiveness of the proposed approach.
The performance evaluation of a spectrum sensing implementation using an automatic modulation classification detection method with a Universal Software Radio Peripheral Based on the inherent capability of automatic modulation classification (AMC), a new spectrum sensing method is proposed in this paper that can detect all forms of primary users' signals in a cognitive radio environment. The study presented in this paper focuses on sensing a set of analog and digitally modulated primary signals. In achieving this objective, a combined analog and digital automatic modulation classifier was developed using an artificial neural network (ANN). The ANN classifier was combined with a GNU Radio and Universal Software Radio Peripheral version 2 (USRP2) to develop the Cognitive Radio Engine (CRE) for detecting primary users' signals in a cognitive radio environment. Detailed information on the development and performance of the CRE is presented in this paper. The performance evaluation of the developed CRE shows that the engine can reliably detect all the primary modulated signals considered. A comparative performance evaluation of the detection method presented in this paper shows that the proposed method performs favorably against the energy detection method, which is currently acclaimed as the best detection method. The study results reveal that a single detection method that can reliably detect all forms of primary radio signals in a cognitive radio environment can only be developed if a feature common to all radio signals is used in its development rather than features that are peculiar to certain signal types only.
1.10392
0.105746
0.026437
0.015367
0.007082
0.0025
0.000565
0.000235
0.000122
0.000046
0.000003
0
0
0
Cardinality-based fuzzy time series for forecasting enrollments Forecasting activities are frequent and widespread in our life. Since Song and Chissom proposed fuzzy time series in 1993, many previous studies have proposed variant fuzzy time series models to deal with uncertain and vague data. A drawback of these models is that they do not appropriately consider the weights of fuzzy relations. This paper proposes a new method to build weighted fuzzy rules by computing the cardinality of each fuzzy relation to solve the above problems. The proposed method is able to build the weighted fuzzy rules based on the concept of large itemsets in Apriori. The yearly data on enrollments at the University of Alabama are adopted to verify and evaluate the performance of the proposed method. The forecasting accuracies of the proposed method are better than those of other methods.
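A compact sketch of the frequency-weighted fuzzy-time-series recipe described above: intervals over the universe of discourse, fuzzification, first-order fuzzy logical relationships, and a forecast that weights each consequent by how often the relationship occurred. The interval count, the toy enrollment-like series, and the weighting scheme are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np
from collections import Counter, defaultdict

series = np.array([13055, 13563, 13867, 14696, 15460, 15311, 15603,
                   15861, 16807, 16919, 16388, 15433, 15497, 15145], dtype=float)

# 1. Universe of discourse split into equal-length intervals.
n_intervals = 7
lo, hi = series.min() - 200, series.max() + 200
edges = np.linspace(lo, hi, n_intervals + 1)
midpoints = (edges[:-1] + edges[1:]) / 2.0

# 2. Fuzzify: assign every observation to the interval (fuzzy set A_i) it falls in.
states = np.clip(np.digitize(series, edges) - 1, 0, n_intervals - 1)

# 3. First-order fuzzy logical relationships A_i -> A_j, weighted by occurrence counts.
flr_counts = defaultdict(Counter)
for cur, nxt in zip(states[:-1], states[1:]):
    flr_counts[cur][nxt] += 1

def forecast(state):
    """Weighted defuzzified forecast for the next value given the current fuzzy state."""
    counts = flr_counts.get(state)
    if not counts:                         # unseen state: fall back to its own midpoint
        return midpoints[state]
    total = sum(counts.values())
    return sum(midpoints[j] * c / total for j, c in counts.items())

print("next-step forecast:", round(forecast(states[-1]), 1))
```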
Adaptive-expectation based multi-attribute FTS model for forecasting TAIEX In recent years, there have been many time series methods proposed for forecasting enrollments, weather, the economy, population growth, and stock price, etc. However, traditional time series models, such as ARIMA, which are expressed by mathematical equations, are not easily understood by stock investors. Besides, fuzzy time series can produce fuzzy rules based on linguistic values, which are more reasonable than mathematical equations for investors. Furthermore, from the literature reviews, two shortcomings are found in fuzzy time series methods: (1) they lack persuasiveness in determining the universe of discourse and the linguistic length of intervals, and (2) only one attribute (closing price) is usually considered in forecasting, not multiple attributes (such as closing price, open price, high price, and low price). Therefore, this paper proposes a multiple attribute fuzzy time series (FTS) method, which incorporates a clustering method and adaptive expectation model, to overcome the shortcomings above. In verification, using actual trading data of the Taiwan Stock Index (TAIEX) as experimental datasets, we evaluate the accuracy of the proposed method and compare the performance with the (Chen, 1996 [7], Yu, 2005 [6], and Cheng, Cheng, & Wang, 2008 [20]) methods. The proposed method is superior to the listed methods based on average error percentage (MAER).
The Roles of Fuzzy Logic and Soft Computing in the Conception, Design and Deployment of Intelligent Systems The essence of soft computing is that, unlike the traditional, hard computing, it is aimed at an accommodation with the pervasive imprecision of the real world. Thus, the guiding principle of soft computing is: ‘...exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality’. In the final analysis, the role model for soft computing is the human mind.
Forecasting the number of outpatient visits using a new fuzzy time series based on weighted-transitional matrix Forecasting the number of outpatient visits can help experts in healthcare administration make strategic decisions. If the number of outpatient visits could be forecast accurately, it would provide healthcare administrators with a basis to manage hospitals effectively, to schedule human resources and finances reasonably, and to distribute hospital material resources suitably. This paper proposes a new fuzzy time series method based on a weighted transitional matrix, together with two new forecasting methods: the Expectation Method and the Grade-Selection Method. From the verification and results, the proposed methods exhibit a relatively lower error rate in comparison to the listed methods, and could be more stable in facing ever-changing future trends. The characteristics of the proposed methods overcome the insufficient handling of information when constructing forecasting rules in previous research.
A hybrid forecasting model for enrollments based on aggregated fuzzy time series and particle swarm optimization In this paper, a new forecasting model based on two computational methods, fuzzy time series and particle swarm optimization, is presented for academic enrollments. Most of fuzzy time series forecasting methods are based on modeling the global nature of the series behavior in the past data. To improve forecasting accuracy of fuzzy time series, the global information of fuzzy logical relationships is aggregated with the local information of latest fuzzy fluctuation to find the forecasting value in fuzzy time series. After that, a new forecasting model based on fuzzy time series and particle swarm optimization is developed to adjust the lengths of intervals in the universe of discourse. From the empirical study of forecasting enrollments of students of the University of Alabama, the experimental results show that the proposed model gets lower forecasting errors than those of other existing models including both training and testing phases.
Fuzzy dual-factor time-series for stock index forecasting An old Wall Street adage goes, "It takes volume to make price move." The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China's stock market. Pacific-Basin Finance Journal, 12, 541-564], Hodgson et al. [Hodgson, A., Masih, A. M. M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68-85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285-295] have found a correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which takes the stock index and trading volume as forecasting factors to predict the stock index. In the empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiple-factor models, Chen's [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263-275] and Huarng and Yu's [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445-462], as comparison models. The experimental results indicate that the proposed model outperforms the listed models and that the employed factors, the stock index and the volume technical indicator VR(t), are effective in stock index forecasting.
Systematic image processing for diagnosing brain tumors: A Type-II fuzzy expert system approach This paper presents a systematic Type-II fuzzy expert system for diagnosing human brain tumors (Astrocytoma tumors) using T1-weighted Magnetic Resonance Images with contrast. The proposed Type-II fuzzy image processing method has four distinct modules: Pre-processing, Segmentation, Feature Extraction, and Approximate Reasoning. We develop a fuzzy rule base by aggregating the existing filtering methods for the Pre-processing step. For the Segmentation step, we extend the Possibilistic C-Mean (PCM) method by using Type-II fuzzy concepts, Mahalanobis distance, and the Kwon validity index. Feature Extraction is done by a Thresholding method. Finally, we develop a Type-II Approximate Reasoning method to recognize the tumor grade in brain MRI. The proposed Type-II expert system has been tested and validated to show its accuracy in the real world. The results show that the proposed system is superior to Type-I fuzzy expert systems in recognizing the brain tumor and its grade.
Interval type-2 fuzzy logic systems: theory and design We present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs: one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise whose SNR is uncertain, and demonstrate an improved performance over type-1 FLSs.
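A minimal sketch of the upper/lower membership function idea for a Gaussian primary MF with an uncertain mean, and of the resulting firing interval of a two-antecedent rule under the product t-norm. The parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss(x, m, sigma):
    """Gaussian membership value exp(-(x-m)^2 / (2 sigma^2))."""
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_gaussian_bounds(x, m1, m2, sigma):
    """Upper/lower membership of an interval type-2 Gaussian set whose
    mean is uncertain in [m1, m2] (m1 <= m2) with fixed sigma."""
    if m1 <= x <= m2:                       # plateau of the upper MF
        upper = 1.0
    else:
        upper = gauss(x, m1 if x < m1 else m2, sigma)
    # Lower MF uses the farther of the two means.
    lower = gauss(x, m2 if x <= (m1 + m2) / 2 else m1, sigma)
    return lower, upper

# Firing interval of a two-antecedent rule under the product t-norm.
x1, x2 = 4.2, 7.5
l1, u1 = it2_gaussian_bounds(x1, m1=3.0, m2=5.0, sigma=1.0)
l2, u2 = it2_gaussian_bounds(x2, m1=6.0, m2=8.0, sigma=1.5)
firing = (l1 * l2, u1 * u2)   # [lower, upper] firing interval
print("firing interval:", firing)
```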
Subsethood, entropy, and cardinality for interval-valued fuzzy sets---An algebraic derivation In this paper a unified formulation of subsethood, entropy, and cardinality for interval-valued fuzzy sets (IVFSs) is presented. An axiomatic skeleton for subsethood measures in the interval-valued fuzzy setting is proposed, in order for subsethood to reduce to an entropy measure. By exploiting the equivalence between the structures of IVFSs and Atanassov's intuitionistic fuzzy sets (A-IFSs), the notion of average possible cardinality is presented and its connection to least and biggest cardinalities, proposed in [E. Szmidt, J. Kacprzyk, Entropy for intuitionistic fuzzy sets, Fuzzy Sets and Systems 118 (2001) 467-477], is established both algebraically and geometrically. A relation with the cardinality of fuzzy sets (FSs) is also demonstrated. Moreover, the entropy-subsethood and interval-valued fuzzy entropy theorems are stated and algebraically proved, which generalize the work of Kosko [Fuzzy entropy and conditioning, Inform. Sci. 40(2) (1986) 165-174; Fuzziness vs. probability, International Journal of General Systems 17(2-3) (1990) 211-240; Neural Networks and Fuzzy Systems, Prentice-Hall International, Englewood Cliffs, NJ, 1992; Intuitionistic Fuzzy Sets: Theory and Applications, Vol. 35 of Studies in Fuzziness and Soft Computing, Physica-Verlag, Heidelberg, 1999] for FSs. Finally, connections of the proposed subsethood and entropy measures for IVFSs with corresponding definitions for FSs and A-IFSs are provided.
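The interval-valued derivation in the paper is more involved; as a point of reference, the sketch below computes Kosko's sigma-count subsethood measure and the fuzzy entropy it induces for ordinary (type-1) fuzzy sets, which is the special case the entropy-subsethood theorem generalizes.

```python
import numpy as np

def subsethood(a, b):
    """Kosko's subsethood S(A,B) = |A intersect B| / |A| with sigma-count cardinality."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    card_a = a.sum()
    return np.minimum(a, b).sum() / card_a if card_a > 0 else 1.0

def fuzzy_entropy(a):
    """Entropy via the entropy-subsethood theorem: E(A) = S(A u A^c, A n A^c)."""
    a = np.asarray(a, float)
    return subsethood(np.maximum(a, 1 - a), np.minimum(a, 1 - a))

A = [0.2, 0.7, 1.0, 0.4]
B = [0.5, 0.8, 1.0, 0.6]
print("S(A,B) =", subsethood(A, B))   # degree to which A is contained in B
print("E(A)   =", fuzzy_entropy(A))   # 0 for crisp sets, 1 when A equals its complement
```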
Cooperative spectrum sensing in cognitive radio networks: A survey Spectrum sensing is a key function of cognitive radio to prevent the harmful interference with licensed users and identify the available spectrum for improving the spectrum's utilization. However, detection performance in practice is often compromised with multipath fading, shadowing and receiver uncertainty issues. To mitigate the impact of these issues, cooperative spectrum sensing has been shown to be an effective method to improve the detection performance by exploiting spatial diversity. While cooperative gain such as improved detection performance and relaxed sensitivity requirement can be obtained, cooperative sensing can incur cooperation overhead. The overhead refers to any extra sensing time, delay, energy, and operations devoted to cooperative sensing and any performance degradation caused by cooperative sensing. In this paper, the state-of-the-art survey of cooperative sensing is provided to address the issues of cooperation method, cooperative gain, and cooperation overhead. Specifically, the cooperation method is analyzed by the fundamental components called the elements of cooperative sensing, including cooperation models, sensing techniques, hypothesis testing, data fusion, control channel and reporting, user selection, and knowledge base. Moreover, the impacting factors of achievable cooperative gain and incurred cooperation overhead are presented. The factors under consideration include sensing time and delay, channel impairments, energy efficiency, cooperation efficiency, mobility, security, and wideband sensing issues. The open research challenges related to each issue in cooperative sensing are also discussed.
Fault-tolerance in the Borealis distributed stream processing system We present a replication-based approach to fault-tolerant distributed stream processing in the face of node failures, network failures, and network partitions. Our approach aims to reduce the degree of inconsistency in the system while guaranteeing that available inputs capable of being processed are processed within a specified time threshold. This threshold allows a user to trade availability for consistency: a larger time threshold decreases availability but limits inconsistency, while a smaller threshold increases availability but produces more inconsistent results based on partial data. In addition, when failures heal, our scheme corrects previously produced results, ensuring eventual consistency. Our scheme uses a data-serializing operator to ensure that all replicas process data in the same order, and thus remain consistent in the absence of failures. To regain consistency after a failure heals, we experimentally compare approaches based on checkpoint/redo and undo/redo techniques and illustrate the performance trade-offs between these schemes.
Efficient harmonic balance simulation using multi-level frequency decomposition Efficient harmonic balance (HB) simulation provides a useful tool for the design of RF and microwave integrated circuits. For practical circuits that can contain strong nonlinearities, however, HB problems cannot be solved reliably or efficiently using conventional techniques. Various preconditioning techniques have been proposed to facilitate a robust and efficient analysis based on Krylov subspace linear solvers. In this work we introduce a multi-level frequency domain preconditioner based on a hierarchical frequency decomposition approach. At each Newton iteration, we recursively solve a set of smaller problems to provide an effective preconditioner for the large linearized HB problem. Compared to the standard single-level block diagonal preconditioner, our experiments indicate that our approach provides a more robust, memory-efficient solution while offering a 2-9x speedup for several strongly nonlinear HB problems.
Time Series Compressibility and Privacy In this paper we study the trade-offs between time series compressibility and partial information hiding and their fundamental implications on how we should introduce uncertainty about individual values by perturbing them. More specifically, if the perturbation does not have the same compressibility properties as the original data, then it can be detected and filtered out, reducing uncertainty. Thus, by making the perturbation "similar" to the original data, we can both preserve the structure of the data better, while simultaneously making breaches harder. However, as data become more compressible, a fraction of the uncertainty can be removed if true values are leaked, revealing how they were perturbed. We formalize these notions, study the above trade-offs on real data and develop practical schemes which strike a good balance and can also be extended for on-the-fly data hiding in a streaming environment.
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
Scores: 1.076452, 0.076, 0.076, 0.076, 0.039718, 0.022243, 0.002222, 0.00007, 0.000002, 0, 0, 0, 0, 0
Estimating human pose from occluded images We address the problem of recovering 3D human pose from single 2D images, in which the pose estimation problem is formulated as a direct nonlinear regression from the image observation to 3D joint positions. One key issue that has not been addressed in the literature is how to estimate 3D pose when humans in the scene are partially or heavily occluded. When occlusions occur, features extracted from the image observations (e.g., silhouette-based shape features, histograms of oriented gradients, etc.) are seriously corrupted, and consequently the regressor (trained on un-occluded images) is unable to estimate pose states correctly. In this paper, we present a method that is capable of handling occlusions using sparse signal representations, in which each test sample is represented as a compact linear combination of training samples. The sparsest solution can then be efficiently obtained by solving a convex optimization problem with certain norms (such as the l1-norm). The corrupted test image can be recovered with a sparse linear combination of un-occluded training images, which can then be used for estimating the human pose correctly (as if no occlusions existed). We also show that the proposed approach implicitly performs relevant feature selection with un-occluded test images. Experimental results on synthetic and real data sets bear out our theory that with sparse representation 3D human pose can be robustly estimated when humans are partially or heavily occluded in the scene.
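The abstract's l1 formulation would normally be handled by a convex solver; as a minimal stand-in, the sketch below recovers a sparse coefficient vector over a synthetic "training" matrix with greedy orthogonal matching pursuit, which illustrates the same represent-a-test-sample-by-few-training-samples idea. The data and dimensions are made up for illustration.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: pick k columns of A that best
    explain y and solve least squares on the selected support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                      # do not re-pick columns
        support.append(int(corr.argmax()))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n_features, n_train, sparsity = 60, 200, 5
A = rng.standard_normal((n_features, n_train))
A /= np.linalg.norm(A, axis=0)                   # unit-norm "training" columns
true_x = np.zeros(n_train)
true_x[rng.choice(n_train, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = A @ true_x                                   # "occlusion-free" observation

x_hat = omp(A, y, sparsity)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(true_x)))
```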
Task-Driven Dictionary Learning Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
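Not the task-driven (supervised) formulation of the paper, but a minimal sketch of the unsupervised dictionary-learning loop it builds on: alternate between sparse coding (here a few ISTA iterations) and a least-squares (MOD-style) dictionary update with column renormalization. All sizes and the regularization weight are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dim, n_atoms, n_samples, lam, n_outer = 20, 40, 500, 0.1, 30

Y = rng.standard_normal((n_dim, n_samples))          # toy training signals
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

def sparse_code(D, Y, lam, n_iter=50):
    """A few ISTA iterations of min_X 0.5*||Y - DX||_F^2 + lam*||X||_1."""
    L = np.linalg.norm(D, 2) ** 2                    # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = X - (D.T @ (D @ X - Y)) / L
        X = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)
    return X

for _ in range(n_outer):
    X = sparse_code(D, Y, lam)                       # sparse coding step
    # MOD dictionary update: least-squares fit of D to (Y, X), then renormalize.
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)

print("final relative reconstruction error:",
      np.linalg.norm(Y - D @ sparse_code(D, Y, lam)) / np.linalg.norm(Y))
```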
Multitask dictionary learning and sparse representation based single-image super-resolution reconstruction Recent research has shown that sparse representation based techniques can lead to state-of-the-art super-resolution image reconstruction (SRIR) results. They rely on the idea that low-resolution (LR) image patches can be regarded as downsampled versions of high-resolution (HR) images, whose patches are assumed to have a sparse representation with respect to a dictionary of prototype patches. In order to avoid a large database of training patches and to obtain a more accurate recovery of HR images, in this paper we introduce the concept of example-aided redundant dictionary learning into single-image super-resolution reconstruction, and propose a multiple-dictionary learning scheme inspired by multitask learning. Compact redundant dictionaries are learned from samples classified by K-means clustering in order to provide each sample a more appropriate dictionary for image reconstruction. Compared with the available SRIR methods, the proposed method has the following characteristics: (1) it introduces example-patch-aided dictionary learning into sparse representation based SRIR, in order to reduce the intensive computational complexity brought by an enormous dictionary, (2) it uses multitask learning and priors from HR image examples to reconstruct similar HR images and obtain better reconstruction results, and (3) it adopts offline dictionary learning and online reconstruction, making rapid reconstruction possible. Experiments on natural test images show that a small set of randomly chosen raw patches from training images and a small number of atoms can produce good reconstruction results. Both the visual results and the numerical measures prove its superiority to some state-of-the-art SRIR methods.
Efficient Recovery of Jointly Sparse Vectors
Methodology for analysis of TSV stress induced transistor variation and circuit performance As continued scaling becomes increasingly difficult, 3D integration with through silicon vias (TSVs) has emerged as a viable solution to achieve higher bandwidth and power efficiency. Mechanical stress induced by thermal mismatch between TSVs and the silicon bulk arising during wafer fabrication and 3D integration, is a key constraint. In this work, we propose a complete flow to characterize the influence of TSV stress on transistor and circuit performance. First, we analyze the thermal stress contour near the silicon surface with single and multiple TSVs through both finite element analysis (FEA) and linear superposition methods. Then, the biaxial stress is converted to mobility and threshold voltage variations depending on transistor type and geometric relation between TSVs and transistors. Next, we propose an efficient algorithm to calculate circuit variation corresponding to TSV stress based on a grid partition approach. Finally, we discuss a TSV pattern optimization strategy, and employ a series of 17-stage ring oscillators using 40 nm CMOS technology as a test case for the proposed approach.
Beyond streams and graphs: dynamic tensor analysis How do we find patterns in author-keyword associations, evolving over time? Or in Data Cubes, with product-branch-customer sales information? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks and many more. However, they have only two orders, like author and keyword, in the above example. We propose to envision such higher order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce the dynamic tensor analysis (DTA) method, and its variants. DTA provides a compact summary for high-order and high-dimensional data, and it also reveals the hidden correlations. Algorithmically, we designed DTA very carefully so that it is (a) scalable, (b) space efficient (it does not need to store the past) and (c) fully automatic with no need for user defined parameters. Moreover, we propose STA, a streaming tensor analysis method, which provides a fast, streaming approximation to DTA. We implemented all our methods, and applied them in two real settings, namely, anomaly detection and multi-way latent semantic indexing. We used two real, large datasets, one on network flow data (100GB over 1 month) and one from DBLP (200MB over 25 years). Our experiments show that our methods are fast, accurate and that they find interesting patterns and outliers on the real datasets.
Generalized spectral decomposition for stochastic nonlinear problems We present an extension of the generalized spectral decomposition method for the resolution of nonlinear stochastic problems. The method consists in the construction of a reduced basis approximation of the Galerkin solution and is independent of the stochastic discretization selected (polynomial chaos, stochastic multi-element or multi-wavelets). Two algorithms are proposed for the sequential construction of the successive generalized spectral modes. They involve decoupled resolutions of a series of deterministic and low-dimensional stochastic problems. Compared to the classical Galerkin method, the algorithms allow for significant computational savings and require minor adaptations of the deterministic codes. The methodology is detailed and tested on two model problems, the one-dimensional steady viscous Burgers equation and a two-dimensional nonlinear diffusion problem. These examples demonstrate the effectiveness of the proposed algorithms which exhibit convergence rates with the number of modes essentially dependent on the spectrum of the stochastic solution but independent of the dimension of the stochastic approximation space.
Statistical static timing analysis: how simple can we get? With an increasing trend in the variation of the primary parameters affecting circuit performance, the need for statistical static timing analysis (SSTA) has been firmly established in the last few years. While it is generally accepted that a timing analysis tool should handle parameter variations, the benefits of advanced SSTA algorithms are still questioned by the designer community because of their significant impact on the complexity of STA flows. In this paper, we present convincing evidence that a path-based SSTA approach implemented as a post-processing step captures the effect of parameter variations on circuit performance fairly accurately. On a microprocessor block implemented in 90nm technology, the error in estimating the standard deviation of the timing margin at the inputs of sequential elements is at most 0.066 FO4 delays, which translates into only 0.31% of the worst-case path delay.
Compressive Sampling and Lossy Compression Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar ...
Aging analysis at gate and macro cell level Aging, which can be regarded as a time-dependent variability, has until recently not received much attention in the field of electronic design automation. This is changing because increasing reliability costs threaten the continued scaling of ICs. We investigate the impact of aging effects on single combinatorial gates and present methods that help to reduce the reliability costs by accurately analyzing the performance degradation of aged circuits at gate and macro cell level.
Compound Linguistic Scale. • The Compound Linguistic Scale comprises the Compound Linguistic Variable, the Fuzzy Normal Distribution, and the Deductive Rating Strategy. • CLV can produce two-dimensional options, i.e. compound linguistic terms, to better reflect raters' preferences. • DRS is a double-step rating approach for a rater to choose a compound linguistic term among two-dimensional options. • FND can efficiently produce a population of fuzzy numbers for a linguistic term set using only a few parameters. • CLS, as a rating interface, can contribute to various application domains in engineering and the social sciences.
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction. (C) 1998 Elsevier Science B.V. All rights reserved.
Intelligent Analysis and Off-Line Debugging of VLSI Device Test Programs Today's microelectronics researchers design VLSI devices to achieve highly differentiated devices, both in performance and functionality. As VLSI devices become more complex, VLSI device testing becomes more costly and time consuming. The increasing test complexity leads to longer device test program development time as well as more expensive test systems, and debugging test programs is a great burden on test program development. On the other hand, there is little formal theory of debugging, and attempts to develop a methodology of debugging are rare. The aim of the investigation in this paper is to create a theory to support analysis and debugging of VLSI device test programs, and then, on the basis of this theory, design and develop an off-line debugging environment, OLDEVDTP, for the creation, analysis, checking, identification, error location, and correction of device test programs off-line from the target VLSI test system, to achieve a dramatic cost and time reduction. In the paper, fuzzy comprehensive evaluation techniques are applied to the program analysis and debugging process to reduce restrictions caused by computational complexity. The analysis, design, and implementation of OLDEVDTP are also addressed in the paper.
Soft computing based on interval valued fuzzy ANP-A novel methodology The Analytic Network Process (ANP) is a multi-criteria decision making (MCDM) tool that takes complex relationships among parameters into account. In this paper, we develop the interval-valued fuzzy ANP (IVF-ANP) to solve MCDM problems, since it allows the interdependent influences specified in the model and generalizes the supermatrix approach. Furthermore, the performance rating values as well as the weights of the criteria are linguistic terms that can be expressed as IVF numbers (IVFN). Moreover, we present a novel methodology for solving MCDM problems. In the proposed methodology, the weights of the criteria are determined by applying the IVF-ANP method. Then, we appraise the performance of alternatives against the criteria via linguistic variables, which are expressed as triangular interval-valued fuzzy numbers. Afterward, the final ranking of the alternatives is obtained by utilizing the IVF weights from IVF-ANP and applying the IVF-TOPSIS and IVF-VIKOR methods. Additionally, to demonstrate the procedural implementation of the proposed model and its effectiveness, we apply it to a case study on assessing the performance of property responsibility insurance companies.
Scores: 1.24, 0.12, 0.12, 0.04, 0.002857, 0.000612, 0.00026, 0, 0, 0, 0, 0, 0, 0
Modeling and Design of Adaptive Video Streaming Control Systems. Adaptive video streaming systems aim at providing the best user experience given the user device and the available network bandwidth. With this purpose, a controller selecting the video bitrate (or level) from a discrete set has to be designed. The control goal is to maximize the video bitrate while avoiding playback interruptions and minimizing video bitrate switches. In this paper, we propose ...
Green framework for future heterogeneous wireless networks Energy-efficient communication has sparked tremendous interest in recent years as one of the main design goals of future wireless Heterogeneous Networks (HetNets). This has resulted in a paradigm shift of current operation from data-oriented to energy-efficiency-oriented networks. In this paper, we propose a framework for green communications in wireless HetNets. This framework is cognitive in a holistic sense and aims at improving the energy efficiency of the whole system, not just one isolated part of the network. In particular, we propose a cyclic approach, named the energy-cognitive cycle, which extends the classic cognitive cycle and enables dynamic selection of different available strategies for reducing the energy consumption in the network while satisfying the quality of service constraints.
Live transcoding and streaming-as-a-service with MPEG-DASH Multimedia content delivery and real-time streaming over the top of the existing infrastructure is nowadays part and parcel of every media ecosystem thanks to open standards and the adoption of the Hypertext Transfer Protocol (HTTP) as its primary means of transportation. Hardware encoder manufacturers have adapted their product lines to support dynamic adaptive streaming over HTTP but suffer from the inflexibility to provide scalability on demand, specifically for event-based live services that are only offered for a limited period of time. The cloud computing paradigm allows for this kind of flexibility and provides the necessary elasticity to easily scale with the demand required for such use-case scenarios. In this paper we describe bitcodin, our transcoding and streaming-as-a-service platform based on open standards (i.e., MPEG-DASH), which is deployed on standard cloud and content delivery infrastructures to enable high-quality streaming to heterogeneous clients. It is currently deployed for video on demand, 24/7 live, and event-based live services using bitdash, our adaptive client framework.
Joint Design of Source Rate Control and QoS-Aware Congestion Control for Video Streaming Over the Internet Multimedia streaming over the Internet has been a very challenging issue due to the dynamic uncertain nature of the channels. This paper proposes an algorithm for the joint design of source rate control and congestion control for video streaming over the Internet. With the incorporation of a virtual network buffer management mechanism (VB), the quality of service (QoS) requirements of the application can be translated into the constraints of the source rate and the sending rate. Then at the application layer, the source rate control is implemented based on the derived constraints, and at the transport layer, a QoS-aware congestion control mechanism is proposed that strives to meet the send rate constraint derived from VB, by allowing temporary violation of transport control protocol (TCP)-friendliness when necessary. Long-term TCP-friendliness, nevertheless, is preserved by introducing a rate-compensation algorithm. Simulation results show that compared with traditional source rate/congestion control algorithms, this cross-layer design approach can better support the QoS requirements of the application, and significantly improve the playback quality by reducing the overflow and underflow of the decoder buffer, and improving quality smoothness, while maintaining good long-term TCP-friendliness
Measurement of Quality of Experience of Video-on-Demand Services: A Survey Video-on-demand streaming services have gained popularity over the past few years. An increase in the speed of the access networks has also led to a larger number of users watching videos online. Online video streaming traffic is estimated to further increase from the current value of 57% to 69% by 2017 (Cisco, 2014). In order to retain the existing users and attract new users, service providers attempt to satisfy the user's expectations and provide a satisfactory viewing experience. The first step toward providing a satisfactory service is to be able to quantify the users' perception of the current service level. Quality of experience (QoE) is a quality metric that provides a holistic measure of the users' perception of the quality. In this survey, we first present a tutorial overview of the popular video streaming techniques deployed for stored videos, followed by identifying various metrics that could be used to quantify the QoE for video streaming services; finally, we present a comprehensive survey of the literature on various tools and measurement methodologies that have been proposed to measure or predict the QoE of online video streaming services.
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
Cubature Kalman Filters In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.
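The core of the filter is the third-degree spherical-radial cubature rule. The sketch below generates the 2n equally weighted cubature points for a Gaussian with mean m and covariance P and pushes them through a nonlinear map to approximate the transformed mean and covariance; the full predict/update bookkeeping of the CKF is omitted, and the polar-to-Cartesian example is ours.

```python
import numpy as np

def cubature_points(m, P):
    """2n third-degree spherical-radial cubature points for N(m, P),
    each with equal weight 1/(2n)."""
    n = len(m)
    S = np.linalg.cholesky(P)                 # P = S S^T
    offsets = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # n x 2n
    return m[:, None] + S @ offsets           # n x 2n matrix of points

def propagate(f, m, P):
    """Approximate mean/covariance of f(x), x ~ N(m, P), with the cubature rule."""
    pts = cubature_points(m, P)
    fx = np.array([f(pts[:, i]) for i in range(pts.shape[1])]).T   # d x 2n
    mean = fx.mean(axis=1)
    diff = fx - mean[:, None]
    cov = diff @ diff.T / fx.shape[1]
    return mean, cov

# Example: polar-to-Cartesian conversion, a classic mildly nonlinear transform.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m = np.array([10.0, np.pi / 4])
P = np.diag([0.5 ** 2, (5 * np.pi / 180) ** 2])
mean, cov = propagate(f, m, P)
print("propagated mean:", mean)
print("propagated covariance:\n", cov)
```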
Numerical Integration using Sparse Grids We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest...
Stereo image quality: effects of mixed spatio-temporal resolution We explored the response of the human visual system to mixed-resolution stereo video-sequences, in which one eye view was spatially or temporally low-pass filtered. It was expected that the perceived quality, depth, and sharpness would be relatively unaffected by low-pass filtering, compared to the case where both eyes viewed a filtered image. Subjects viewed two 10-second stereo video-sequences, in which the right-eye frames were filtered vertically (V) and horizontally (H) at 1/2 H, 1/2 V, 1/4 H, 1/4 V, 1/2 H 1/2 V, 1/2 H 1/4 V, 1/4 H 1/2 V, and 1/4 H 1/4 V resolution. Temporal filtering was implemented for a subset of these conditions at 1/2 temporal resolution, or with drop-and-repeat frames. Subjects rated the overall quality, sharpness, and overall sensation of depth. It was found that spatial filtering produced acceptable results: the overall sensation of depth was unaffected by low-pass filtering, while ratings of quality and of sharpness were strongly weighted towards the eye with the greater spatial resolution. By comparison, temporal filtering produced unacceptable results: field averaging and drop-and-repeat frame conditions yielded images with poor quality and sharpness, even though perceived depth was relatively unaffected. We conclude that spatial filtering of one channel of a stereo video-sequence may be an effective means of reducing the transmission bandwidth
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
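SpaRSA itself adds adaptive (Barzilai-Borwein) step choices and handles more general separable regularizers; the sketch below shows only the basic iteration such methods build on, plain iterative soft thresholding for the standard ℓ2-ℓ1 problem, run on a synthetic compressed-sensing instance with assumed sizes.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 with a fixed step 1/L."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(3)
n, m, k = 100, 256, 8                       # measurements, unknowns, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(n)

x_hat = ista(A, y, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```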
A review on the design and optimization of interval type-2 fuzzy controllers A review of the methods used in the design of interval type-2 fuzzy controllers has been considered in this work. The fundamental focus of the work is based on the basic reasons for optimizing type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques. We also provide a comparison of the different optimization methods for the case of designing type-2 fuzzy controllers.
A fuzzy CBR technique for generating product ideas This paper presents a fuzzy CBR (case-based reasoning) technique for generating new product ideas from a product database for enhancing the functions of a given product (called the baseline product). In the database, a product is modeled by a 100-attribute vector, 87 of which are used to model the use-scenario and 13 are used to describe the manufacturing/recycling features. Based on the use-scenario attributes and their relative weights - determined by a fuzzy AHP technique, a fuzzy CBR retrieving mechanism is developed to retrieve product-ideas that tend to enhance the functions of the baseline product. Based on the manufacturing/recycling features, a fuzzy CBR mechanism is developed to screen the retrieved product ideas in order to obtain a higher ratio of valuable product ideas. Experiments indicate that the retrieving-and-filtering mechanism outperforms the prior retrieving-only mechanism in terms of generating a higher ratio of valuable product ideas.
Soft computing based on interval valued fuzzy ANP-A novel methodology The Analytic Network Process (ANP) is a multi-criteria decision making (MCDM) tool that takes complex relationships among parameters into account. In this paper, we develop the interval-valued fuzzy ANP (IVF-ANP) to solve MCDM problems, since it allows the interdependent influences specified in the model and generalizes the supermatrix approach. Furthermore, the performance rating values as well as the weights of the criteria are linguistic terms that can be expressed as IVF numbers (IVFN). Moreover, we present a novel methodology for solving MCDM problems. In the proposed methodology, the weights of the criteria are determined by applying the IVF-ANP method. Then, we appraise the performance of alternatives against the criteria via linguistic variables, which are expressed as triangular interval-valued fuzzy numbers. Afterward, the final ranking of the alternatives is obtained by utilizing the IVF weights from IVF-ANP and applying the IVF-TOPSIS and IVF-VIKOR methods. Additionally, to demonstrate the procedural implementation of the proposed model and its effectiveness, we apply it to a case study on assessing the performance of property responsibility insurance companies.
Scores: 1.2, 0.2, 0.2, 0.05, 0.022222, 0, 0, 0, 0, 0, 0, 0, 0, 0
RIDA: a robust information-driven data compression architecture for irregular wireless sensor networks In this paper, we propose and evaluate RIDA, a novel information-driven architecture for distributed data compression in a sensor network, allowing it to conserve energy and bandwidth and potentially enabling high-rate data sampling. The key idea is to determine the data correlation among a group of sensors based on the value of the data itself to significantly improve compression. Hence, this approach moves beyond traditional data compression schemes which rely only on spatial and temporal data correlation. A logical mapping, which assigns indices to nodes based on the data content, enables simple implementation, on nodes, of data transformation without any other information. The logical mapping approach also adapts particularly well to irregular sensor network topologies. We evaluate our architecture with both Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) on publicly available real-world data sets. Our experiments on both simulation and real data show that 30% of energy and 80-95% of the bandwidth can be saved for typical multi-hop data networks. Moreover, the original data can be retrieved after decompression with a low error of about 3%. Furthermore, we also propose a mechanism to detect and classify missing or faulty nodes, showing accuracy and recall of 95% when half of the nodes in the network are missing or faulty.
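The transform stage of such an architecture can be mimicked in a few lines: transform a block of readings with a DCT, keep only the largest-magnitude coefficients as the compressed payload, and check the reconstruction error. The logical mapping and the distributed/fault-detection parts of RIDA are not shown, and the readings below are synthetic.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(7)

# Toy block of correlated sensor readings (e.g., one epoch from a cluster).
n = 64
readings = 20 + 5 * np.sin(np.linspace(0, 2 * np.pi, n)) + 0.3 * rng.standard_normal(n)

coeffs = dct(readings, norm="ortho")

# Keep only the k largest-magnitude coefficients (the "compressed" payload).
k = 8
kept = np.zeros_like(coeffs)
idx = np.argpartition(np.abs(coeffs), -k)[-k:]
kept[idx] = coeffs[idx]

reconstructed = idct(kept, norm="ortho")
err = np.linalg.norm(reconstructed - readings) / np.linalg.norm(readings)
print(f"kept {k}/{n} coefficients, relative error = {err:.3%}")
```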
Practical data compression in wireless sensor networks: A survey Power consumption is a critical problem affecting the lifetime of wireless sensor networks. A number of techniques have been proposed to solve this issue, such as energy-efficient medium access control or routing protocols. Among those proposed techniques, the data compression scheme is one that can be used to reduce transmitted data over wireless channels. This technique leads to a reduction in the required inter-node communication, which is the main power consumer in wireless sensor networks. In this article, a comprehensive review of existing data compression approaches in wireless sensor networks is provided. First, suitable sets of criteria are defined to classify existing techniques as well as to determine what practical data compression in wireless sensor networks should be. Next, the details of each classified compression category are described. Finally, their performance, open issues, limitations and suitable applications are analyzed and compared based on the criteria of practical data compression in wireless sensor networks.
Multiresolution Spatial and Temporal Coding in a Wireless Sensor Network for Long-Term Monitoring Applications In many WSN (wireless sensor network) applications, such as [1], [2], [3], the targets are to provide long-term monitoring of environments. In such applications, energy is a primary concern because sensor nodes have to regularly report data to the sink and need to continuously work for a very long time so that users may periodically request a rough overview of the monitored environment. On the other hand, users may occasionally query more in-depth data of certain areas to analyze abnormal events. These requirements motivate us to propose a multiresolution compression and query (MRCQ) framework to support in-network data compression and data storage in WSNs from both space and time domains. Our MRCQ framework can organize sensor nodes hierarchically and establish multiresolution summaries of sensing data inside the network, through spatial and temporal compressions. In the space domain, only lower resolution summaries are sent to the sink; the other higher resolution summaries are stored in the network and can be obtained via queries. In the time domain, historical data stored in sensor nodes exhibit a finer resolution for more recent data, and a coarser resolution for older data. Our methods consider the hardware limitations of sensor nodes. So, the result is expected to save sensors' energy significantly, and thus, can support long-term monitoring WSN applications. A prototyping system is developed to verify its feasibility. Simulation results also show the efficiency of MRCQ compared to existing work.
Sparse Event Detection In Wireless Sensor Networks Using Compressive Sensing Compressive sensing is a recently proposed revolutionary idea for achieving a much lower sampling rate for sparse signals. For large wireless sensor networks, events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraints, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem of sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to a level similar to the number of sparse events, which is much smaller than the total number of sources. Second, we assume the events are binary in nature, and employ Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under Gaussian noise. The simulation results show that the sampling rate can be reduced to 25% without sacrificing performance. As the sampling rate is decreased further, the performance gradually degrades down to a sampling rate of 10%. Our proposed detection algorithm has much better performance than the l1-magic algorithm proposed in the literature.
Distributed sparse random projections for refinable approximation Consider a large-scale wireless sensor network measuring compressible data, where n distributed data values can be well-approximated using only k « n coefficients of some known transform. We address the problem of recovering an approximation of the n data values by querying any L sensors, so that the reconstruction error is comparable to the optimal k-term approximation. To solve this problem, we present a novel distributed algorithm based on sparse random projections, which requires no global coordination or knowledge. The key idea is that the sparsity of the random projections greatly reduces the communication cost of pre-processing the data. Our algorithm allows the collector to choose the number of sensors to query according to the desired approximation error. The reconstruction quality depends only on the number of sensors queried, enabling robust refinable approximation.
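The distributed protocol and the decoder are beyond a few lines, but the projection itself is easy to illustrate: a sparse {+1, 0, -1} random matrix (an Achlioptas-style construction, used here as a stand-in for the paper's scheme) so that each data value only contributes to a few random sums, while norms are roughly preserved.

```python
import numpy as np

rng = np.random.default_rng(4)

def sparse_projection_matrix(L, n, s):
    """L x n matrix with entries sqrt(s) * {+1, 0, -1} taken with
    probabilities 1/(2s), 1 - 1/s, 1/(2s); larger s means sparser."""
    signs = rng.choice([1.0, -1.0], size=(L, n))
    mask = rng.random((L, n)) < 1.0 / s
    return np.sqrt(s) * signs * mask

n, L, s = 1000, 80, 10                    # sensors, projections kept, sparsity factor
data = np.sin(np.linspace(0, 8 * np.pi, n)) + 0.05 * rng.standard_normal(n)

Phi = sparse_projection_matrix(L, n, s)
sketch = Phi @ data / np.sqrt(L)          # the L random sums the collector queries

# Johnson-Lindenstrauss-style check: the sketch roughly preserves the norm.
print("||data||   =", np.linalg.norm(data))
print("||sketch|| =", np.linalg.norm(sketch))
```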
Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run "out of the box" with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e., we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include the following. 1) For all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros are the same amplitude. 2) Various random matrix ensembles give the same phase transitions; random partial isometries may give different transitions and require different tuning. 3) Optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
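A minimal version of the iterative hard thresholding family that the study tunes: each iteration takes a gradient step and keeps the k largest-magnitude entries. No relaxation or parameter tuning is included; the step size and problem sizes below are our own conservative choices.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def iht(A, y, k, n_iter=500, step=None):
    """Iterative hard thresholding for y = Ax with x assumed k-sparse."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative gradient step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

rng = np.random.default_rng(5)
n, m, k = 120, 400, 10
A = rng.standard_normal((n, m)) / np.sqrt(n)     # roughly unit-norm columns
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

x_hat = iht(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```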
A lower estimate for entropy numbers The behaviour of the entropy numbers $e_k(\mathrm{id}\colon \ell_p^n \to \ell_q^n)$, $0 < p < q \le \infty$, is well known (up to multiplicative constants independent of $n$ and $k$), except in the quasi-Banach case $0 < p < 1$ for "medium size" $k$, i.e., when $\log n \le k \le n$, where only an upper estimate is available so far. We close this gap by proving the lower estimate $e_k(\mathrm{id}\colon \ell_p^n \to \ell_q^n) \ge c \, \big(\log(n/k+1)/k\big)^{1/p - 1/q}$ for all $0 < p < q \le \infty$ and $\log n \le k \le n$, with some constant $c > 0$ depending only on $p$.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
A machine learning approach to coreference resolution of noun phrases In this paper, we present a learning approach to coreference resolution of noun phrases in unrestricted text. The approach learns from a small, annotated corpus and the task includes resolving not just a certain type of noun phrase (e.g., pronouns) but rather general noun phrases. It also does not restrict the entity types of the noun phrases; that is, coreference is assigned whether they are of "organization," "person," or other types. We evaluate our approach on common data sets (namely, the MUC-6 and MUC-7 coreference corpora) and obtain encouraging results, indicating that on the general noun phrase coreference task, the learning approach holds promise and achieves accuracy comparable to that of nonlearning approaches. Our system is the first learning-based system that offers performance comparable to that of state-of-the-art nonlearning systems on these data sets.
The more-for-less paradox in fuzzy posynomial geometric programming The more-for-less (MFL) problem in fuzzy posynomial geometric programming (FPGP) is advanced in this paper. The research results presented here focus primarily on FPGP that is nonconvex in both the objective functions and the constraint functions. Convexification, whether quasiconvex or pseudoconvex, is extended in the sense of the MFL paradox by consolidating the necessary and sufficient conditions. Since the FPGP is correspondingly equivalent to a fuzzy linear program, a solution to the FPGP exists. Furthermore, the duality or strong duality theorem, the equivalent condition of the MFL paradox, and its condition under expansion are examined in detail. It is well known that a fundamental understanding of the MFL paradox is of paramount importance to applications in resource allotment and optimal resource management, and correspondingly that advances in information science and technology play a role in resource allotment and resource selection in management problems. In fact, the two are dependent and intertwined.
Deblurring from highly incomplete measurements for remote sensing When we take photos, we often get blurred pictures because of hand shake, motion, insufficient light, an unsuitable focal length, or other disturbances. Recently, a compressed-sensing (CS) theorem, which provides a new sampling theory for data acquisition, has been applied to medical and astronomical imaging. CS makes it possible to take super-resolution photos using only one or a few pixels, rather th...
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is however a main difference from the traditional quality assessment approaches, as the focus now lies on the user-perceived quality, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.112222, 0.088889, 0.066667, 0.053333, 0.01487, 0.000333, 0.000003, 0, 0, 0, 0, 0, 0, 0
Uncertainty and Worst-Case Analysis in Electrical Measurements Using Polynomial Chaos Theory In this paper, the authors propose an analytical method for estimating the possible worst-case measurement due to the propagation of uncertainty. This analytical method uses polynomial chaos theory (PCT) to formally include the effects of uncertainty as it propagates through an indirect measurement. The main assumption is that an analytical model of the measurement process is available. To demonst...
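An illustration of the underlying machinery rather than the authors' worst-case procedure: a scalar measurement function of a standard-Gaussian parameter is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, and the resulting PC coefficients give the mean and variance of the measured quantity. The nonlinearity g below is a placeholder, not a model from the paper.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Indirect measurement g(xi) of a quantity that depends on an uncertain
# parameter xi ~ N(0, 1) (placeholder nonlinearity).
g = lambda xi: 1.0 / (1.0 + 0.3 * xi + 0.05 * xi ** 2)

order = 6
nodes, weights = He.hermegauss(40)          # Gauss-Hermite(e) rule, weight exp(-x^2/2)
weights = weights / weights.sum()           # normalize to a probability measure

# Projection: c_k = E[g(xi) He_k(xi)] / k!  (He_k are orthogonal with norm k!)
coeffs = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0] * k + [1])  # He_k evaluated at the quadrature nodes
    coeffs.append(np.sum(weights * g(nodes) * Hk) / factorial(k))

mean = coeffs[0]
variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("PC mean    :", mean)
print("PC variance:", variance)

# Cross-check against brute-force Monte Carlo.
xi = np.random.default_rng(6).standard_normal(200_000)
print("MC mean    :", g(xi).mean(), " MC variance:", g(xi).var())
```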
Bounding the Dynamic Behavior of an Uncertain System via Polynomial Chaos-based Simulation Parametric uncertainty can represent parametric tolerance, parameter noise or parameter disturbances. The effects of these uncertainties on the time evolution of a system can be extremely significant, mostly when studying closed-loop operation of control systems. The presence of uncertainty makes the modeling process challenging, since it is impossible to express the behavior of the system with a deterministic approach. If the uncertainties can be defined in terms of probability density function, probabilistic approaches can be adopted. In many cases, the most useful aspect is the evaluation of the worst-case scenario, thus limiting the problem to the evaluation of the boundary of the set of solutions. This is particularly true for the analysis of robust stability and performance of a closed-loop system. The goal of this paper is to demonstrate how the polynomial chaos theory (PCT) can simplify the determination of the worst-case scenario, quickly providing the boundaries in time domain. The proposed approach is documented with examples and with the description of the Maple worksheet developed by the authors for the automatic processing in the PCT framework.
Automatic synthesis of uncertain models for linear circuit simulation: A polynomial chaos theory approach A generalized and automated process for the evaluation of system uncertainty using computer simulation is presented. Wiener–Askey polynomial chaos and generalized polynomial chaos expansions along with Galerkin projections, are used to project a resistive companion system representation onto a stochastic space. Modifications to the resistive companion modeling method that allow for individual models to be produced independently from one another are presented. The results of the polynomial chaos system simulation are compared to Monte Carlo simulation results from PSPICE and C++. The comparison of the simulation results from the various methods demonstrates that polynomial chaos circuit simulation is accurate and advantageous. The algorithms and processes presented in this paper are the basis for the creation of a computer-aided design (CAD) simulator for linear networks containing uncertain parameters.
A polynomial chaos approach to measurement uncertainty Measurement uncertainty is traditionally represented in the form of expanded uncertainty as defined through the Guide to the Expression of Uncertainty in Measurement (GUM). The International Organization for Standardization GUM represents uncertainty through confidence intervals based on the variances and means derived from probability density functions. A new approach to the evaluation of measure...
A multi-element generalized polynomial chaos approach to analysis of mobile robot dynamics under uncertainty The ability of mobile robots to quickly and accurately analyze their dynamics is critical to their safety and efficient operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this must be considered in an analysis of robot motion. Here a Multi-Element generalized Polynomial Chaos (MEgPC) approach is presented that explicitly considers vehicle parameter uncertainty for long term estimation of robot dynamics. It is shown to be an improvement over the generalized Askey polynomial chaos framework as well as the standard Monte Carlo scheme, and can be used for efficient, accurate prediction of robot dynamics.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on...
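To make the mechanism above concrete, here is a small illustrative sketch (not Knuth's own notation) that evaluates a binary numeral through synthesized attributes on a hand-built derivation tree. The class names and the single attribute value are choices made for this example; an inherited attribute such as a digit's scale could be threaded top-down in the same style.

```python
# Synthesized attributes on a derivation tree for binary numerals (toy sketch).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bit:                               # production B -> 0 | 1
    digit: int
    @property
    def value(self) -> int:              # synthesized attribute of B
        return self.digit

@dataclass
class BitString:                         # productions L -> B  and  L -> L B
    bit: Bit
    prefix: Optional["BitString"] = None
    @property
    def value(self) -> int:              # synthesized attribute of L
        if self.prefix is None:
            return self.bit.value
        return 2 * self.prefix.value + self.bit.value

def derivation_tree(s: str) -> BitString:
    node = None
    for ch in s:                         # build the tree left to right
        node = BitString(Bit(int(ch)), node)
    return node

print(derivation_tree("1101").value)     # -> 13
```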
Sets with type-2 operations The algebra of truth values of type-2 fuzzy sets consists of all mappings of the unit interval to itself, with type-2 operations that are convolutions of ordinary max and min operations. This paper is concerned with a special subalgebra of this truth value algebra, namely the set of nonzero functions with values in the two-element set {0,1}. This algebra can be identified with the set of all non-empty subsets of the unit interval, but the operations are not the usual union and intersection. We give simplified descriptions of the operations and derive the basic algebraic properties of this algebra, including the identification of its automorphism group. We also discuss some subalgebras and homomorphisms between them and look briefly at t-norms on this algebra of sets.
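The abstract's point that these convolution operations are not the usual union and intersection can be checked in a few lines; a minimal sketch, assuming two small crisp subsets of [0, 1] as type-2 truth values, is:

```python
# For crisp subsets of [0, 1], the type-2 convolution operations reduce to
# pairwise max and min; they differ from ordinary union and intersection.
A = frozenset([0.2, 0.7])
B = frozenset([0.5])

join = frozenset(max(a, b) for a in A for b in B)   # convolution of max
meet = frozenset(min(a, b) for a in A for b in B)   # convolution of min
print(sorted(join), sorted(meet))                   # [0.5, 0.7] and [0.2, 0.5]
print(sorted(A | B), sorted(A & B))                 # ordinary union/intersection: [0.2, 0.5, 0.7] and []
```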
MIMO technologies in 3GPP LTE and LTE-advanced 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.
Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients In Monte Carlo methods quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows, in certain cases, to reduce the overall work to that of the discretization of one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
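A toy sketch of the multilevel idea described above follows, with a one-dimensional midpoint quadrature standing in for the PDE solve and ad hoc (not optimized) sample sizes per level; it is meant only to show the telescoping estimator, not the paper's error and work analysis.

```python
# Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
import numpy as np
rng = np.random.default_rng(0)

def P(level, u):
    """Level-l approximation of int_0^1 exp(u*x) dx using 2**level midpoint cells."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(u * x))

def mlmc(L, N0=20000):
    estimate = 0.0
    for level in range(L + 1):
        N = max(N0 // 4 ** level, 100)            # geometric decay of sample sizes
        u = rng.uniform(0.0, 1.0, size=N)         # the random coefficient
        fine = np.array([P(level, ui) for ui in u])
        if level == 0:
            estimate += fine.mean()
        else:                                     # same samples for fine and coarse
            coarse = np.array([P(level - 1, ui) for ui in u])
            estimate += (fine - coarse).mean()
    return estimate

print(mlmc(L=5))   # approximates E[(e^u - 1)/u] for u ~ U(0, 1), about 1.3179
```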
User profiles and fuzzy logic for web retrieval issues We present a study of the role of user profiles using fuzzy logic in web retrieval processes. Flexibility for user interaction and for adaptation in profile construction becomes an important issue. We focus our study on user profiles, including creation, modification, storage, clustering and interpretation. We also consider the role of fuzzy logic and other soft computing techniques to improve user profiles. Extended profiles contain additional information related to the user that can be used to personalize and customize the retrieval process as well as the web site. Web mining processes can be carried out by means of fuzzy clustering of these extended profiles and fuzzy rule construction. Fuzzy inference can be used in order to modify queries and extract knowledge from profiles with marketing purposes within a web framework. An architecture of a portal that could support web mining technology is also presented.
A probabilistic model for multimodal hash function learning In recent years, both hashing-based similarity search and multimodal similarity search have aroused much research interest in the data mining and other communities. While hashing-based similarity search seeks to address the scalability issue, multimodal similarity search deals with applications in which data of multiple modalities are available. In this paper, our goal is to address both issues simultaneously. We propose a probabilistic model, called multimodal latent binary embedding (MLBE), to learn hash functions from multimodal data automatically. MLBE regards the binary latent factors as hash codes in a common Hamming space. Given data from multiple modalities, we devise an efficient algorithm for the learning of binary latent factors which corresponds to hash function learning. Experimental validation of MLBE has been conducted using both synthetic data and two realistic data sets. Experimental results show that MLBE compares favorably with two state-of-the-art models.
Design of interval type-2 fuzzy models through optimal granularity allocation In this paper, we offer a new design methodology of type-2 fuzzy models whose intent is to effectively exploit the uncertainty of non-numeric membership functions. A new performance index, which guides the development of the fuzzy model, is used to navigate the construction of the fuzzy model. The underlying idea is that an optimal granularity allocation throughout the membership functions used in the fuzzy model leads to the best design. In contrast to the commonly utilized criterion where one strives for the highest accuracy of the model, the proposed index is formed in such a way that the type-2 fuzzy model produces intervals which "cover" the experimental data and at the same time are made as narrow (viz. specific) as possible. A genetic algorithm is proposed to automate the design process and further improve the results by carefully exploiting the search space. Experimental results show the efficiency of the proposed design methodology.
A fuzzy CBR technique for generating product ideas This paper presents a fuzzy CBR (case-based reasoning) technique for generating new product ideas from a product database for enhancing the functions of a given product (called the baseline product). In the database, a product is modeled by a 100-attribute vector, 87 of which are used to model the use-scenario and 13 are used to describe the manufacturing/recycling features. Based on the use-scenario attributes and their relative weights - determined by a fuzzy AHP technique, a fuzzy CBR retrieving mechanism is developed to retrieve product-ideas that tend to enhance the functions of the baseline product. Based on the manufacturing/recycling features, a fuzzy CBR mechanism is developed to screen the retrieved product ideas in order to obtain a higher ratio of valuable product ideas. Experiments indicate that the retrieving-and-filtering mechanism outperforms the prior retrieving-only mechanism in terms of generating a higher ratio of valuable product ideas.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.055465
0.059979
0.05
0.029507
0.019993
0
0
0
0
0
0
0
0
0
Regression Model Based on Fuzzy Random Variables In real-world regression problems, various statistical data may be linguistically imprecise or vague. Because of such co-existence of random and fuzzy information, we cannot characterize the data only by random variables. Therefore, one can consider the use of fuzzy random variables as an integral component of regression problems. The objective of this paper is to build a regression model based on fuzzy random variables. First, a general regression model for fuzzy random data is proposed. After that, using expected value operators of fuzzy random variables, an expected regression model is established. The expected regression model can be developed by converting the original problem into a linear programming problem. Finally, an explanatory example is provided.
Building Confidence-Interval-Based Fuzzy Random Regression Models In real-world regression analysis, statistical data may be linguistically imprecise or vague. Given the co-existence of stochastic and fuzzy uncertainty, real data cannot be characterized by using only the formalism of random variables. In order to address regression problems in the presence of such hybrid uncertain data, fuzzy random variables are introduced in this study to serve as an integral component of regression models. A new class of fuzzy regression models that is based on fuzzy random data is built, and is called the confidence-interval-based fuzzy random regression model (CI-FRRM). First, a general fuzzy regression model for fuzzy random data is introduced. Then, using expectations and variances of fuzzy random variables, sigma-confidence intervals are constructed for fuzzy random input-output data. The CI-FRRM is established based on the sigma-confidence intervals. The proposed regression model gives rise to a nonlinear programming problem that consists of fuzzy numbers or interval numbers. Since sign changes in the fuzzy coefficients modify the entire programming structure of the solution process, the inherent dynamic nonlinearity of this optimization makes it difficult to exploit the techniques of linear programming or classical nonlinear programming. Therefore, we resort to some heuristics. Finally, an illustrative example is provided.
Overview on the development of fuzzy random variables This paper presents a backward analysis on the interpretation, modelling and impact of the concept of fuzzy random variable. After some preliminaries, the situations modelled by means of fuzzy random variables as well as the main approaches to model them are explained. We also summarize briefly some of the probabilistic studies concerning this concept as well as some statistical applications.
Generalized theory of uncertainty (GTU)-principal concepts and ideas Uncertainty is an attribute of information. The path-breaking work of Shannon has led to a universal acceptance of the thesis that information is statistical in nature. Concomitantly, existing theories of uncertainty are based on probability theory. The generalized theory of uncertainty (GTU) departs from existing theories in essential ways. First, the thesis that information is statistical in nature is replaced by a much more general thesis that information is a generalized constraint, with statistical uncertainty being a special, albeit important case. Equating information to a generalized constraint is the fundamental thesis of GTU. Second, bivalence is abandoned throughout GTU, and the foundation of GTU is shifted from bivalent logic to fuzzy logic. As a consequence, in GTU everything is or is allowed to be a matter of degree or, equivalently, fuzzy. Concomitantly, all variables are, or are allowed to be granular, with a granule being a clump of values drawn together by a generalized constraint. And third, one of the principal objectives of GTU is achievement of NL-capability, that is, the capability to operate on information described in natural language. NL-capability has high importance because much of human knowledge, including knowledge about probabilities, is described in natural language. NL-capability is the focus of attention in the present paper. The centerpiece of GTU is the concept of a generalized constraint. The concept of a generalized constraint is motivated by the fact that most real-world constraints are elastic rather than rigid, and have a complex structure even when simple in appearance. The paper concludes with examples of computation with uncertain information described in natural language.
The concept of a linguistic variable and its application to approximate reasoning—I By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23. In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The concept of a linguistic variable provides a means of approximate characterization of phenomena which are too complex or too ill-defined to be amenable to description in conventional quantitative terms. In particular, treating Truth as a linguistic variable with values such as true, very true, completely true, not very true, untrue, etc., leads to what is called fuzzy logic. By providing a basis for approximate reasoning, that is, a mode of reasoning which is neither exact nor very inexact, such logic may offer a more realistic framework for human reasoning than the traditional two-valued logic. It is shown that probabilities, too, can be treated as linguistic variables with values such as likely, very likely, unlikely, etc. Computation with linguistic probabilities requires the solution of nonlinear programs and leads to results which are imprecise to the same degree as the underlying probabilities. The main applications of the linguistic approach lie in the realm of humanistic systems, especially in the fields of artificial intelligence, linguistics, human decision processes, pattern recognition, psychology, law, medical diagnosis, information retrieval, economics and related areas.
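The compatibility functions and hedges described above can be illustrated in a few lines. The particular membership functions for young and old below are assumptions made for this example (not the paper's), with the hedge very modeled in the usual way as squaring, negation as complement, and the connective and as min.

```python
# Illustrative compatibility functions and hedges (assumed shapes, toy example).
import numpy as np

young = lambda u: float(np.clip(1.0 - max(u - 25.0, 0.0) / 15.0, 0.0, 1.0))
old   = lambda u: float(np.clip((u - 50.0) / 20.0, 0.0, 1.0))
very  = lambda mu: mu ** 2            # hedge as a nonlinear operator
not_  = lambda mu: 1.0 - mu
and_  = min                           # connective "and"

print(round(young(27), 2), round(very(young(27)), 2))       # "young" vs "very young" at age 27
age = 45
print(and_(not_(very(young(age))), not_(very(old(age)))))   # "not very young and not very old"
```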
Intelligent multi-criteria fuzzy group decision-making for situation assessments Organizational decisions and situation assessment are often made in groups, and decision and assessment processes involve various uncertain factors. To make group decision-making more efficient, this study presents a new rational-political model as a systematic means of supporting group decision-making in an uncertain environment. The model takes advantage of both rational and political models and can handle inconsistent assessment, incomplete information and inaccurate opinions in deriving the best solution for the group decision under a sequential framework. The model particularly identifies three uncertain factors involved in a group decision-making process: decision makers' roles, preferences for alternatives, and judgments for assessment-criteria. Based on this model, an intelligent multi-criteria fuzzy group decision-making method is proposed to deal with the three uncertain factors described by linguistic terms. The proposed method uses general fuzzy numbers and aggregates these factors into a satisfactory group decision that is most acceptable to the group. Inference rules are particularly introduced into the method for checking the consistency of individual preferences. Finally, a real case-study on a business situation assessment is illustrated by the proposed method.
An adaptive consensus support model for group decision-making problems in a multigranular fuzzy linguistic context Different consensus models for group decision-making (GDM) problems have been proposed in the literature. However, all of them consider the consensus reaching process a rigid or inflexible one because its behavior remains fixed in all rounds of the consensus process. The aim of this paper is to improve the consensus reaching process in GDM problems defined in multigranular linguistic contexts, i.e., by using linguistic term sets with different cardinality to represent experts' preferences. To do that, we propose an adaptive consensus support system model for this type of decision-making problem, i.e., a process that adapts its behavior to the agreement achieved in each round. This adaptive model increases the convergence toward the consensus and, therefore, reduces the number of rounds to reach it.
A sequential selection process in group decision making with a linguistic assessment approach In this paper a Sequential Selection Process in Group Decision Making under linguistic assessments is presented, where a set of linguistic preference relations represents individuals' preferences. A collective linguistic preference is obtained by means of a defined linguistic ordered weighted averaging operator whose weights are chosen according to the concept of fuzzy majority, specified by a fuzzy linguistic quantifier. Then we define the concepts of linguistic nondominance, linguistic...
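The aggregation step above can be conveyed with a hedged simplification: ordered weighted averaging with weights induced by a fuzzy-majority quantifier, applied to label indices in an ordered linguistic term set. The quantifier parameters, term set, and opinions below are illustrative assumptions, and this is the flavor of an LOWA-style operator rather than a reimplementation of the paper's definition.

```python
# OWA aggregation with fuzzy-majority ("most") quantifier weights (sketch).
import numpy as np

def quantifier_weights(n, a=0.3, b=0.8):
    Q = lambda r: float(np.clip((r - a) / (b - a), 0.0, 1.0))
    return np.array([Q((k + 1) / n) - Q(k / n) for k in range(n)])

terms = ["none", "low", "medium", "high", "perfect"]      # ordered term set (assumed)
opinions = ["high", "medium", "perfect", "high"]          # one label per expert
ordered = sorted((terms.index(o) for o in opinions), reverse=True)
w = quantifier_weights(len(ordered))
collective = int(round(float(np.dot(w, ordered))))        # round back to a label index
print(terms[collective])                                  # -> "high"
```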
Anaphora for everyone: pronominal anaphora resolution without a parser We present an algorithm for anaphora resolution which is a modified and extended version of that developed by Lappin and Leass (1994). In contrast to that work, our algorithm does not require in-depth, full, syntactic parsing of text. Instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. Evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not, or cannot, employ robust and reliable parsing components.
Fading correlation and its effect on the capacity of multielement antenna systems We investigate the effects of fading correlations in multielement antenna (MEA) communication systems. Pioneering studies showed that if the fades connecting pairs of transmit and receive antenna elements are independently, identically distributed, MEAs offer a large increase in capacity compared to single-antenna systems. An MEA system can be described in terms of spatial eigenmodes, which are single-input single-output subchannels. The channel capacity of an MEA is the sum of capacities of these subchannels. We show that the fading correlation affects the MEA capacity by modifying the distributions of the gains of these subchannels. The fading correlation depends on the physical parameters of MEA and the scatterer characteristics. In this paper, to characterize the fading correlation, we employ an abstract model, which is appropriate for modeling narrow-band Rayleigh fading in fixed wireless systems
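As a numerical companion to the abstract above, the following sketch estimates ergodic MIMO capacity under an assumed exponential receive-correlation model. The correlation model, SNR, and array sizes are illustrative choices, not the paper's abstract channel model.

```python
# Monte Carlo ergodic capacity C = E[log2 det(I + (rho/Nt) H H^H)] (sketch).
import numpy as np
rng = np.random.default_rng(1)

def ergodic_capacity(nt=4, nr=4, rho=10.0, r=0.0, trials=2000):
    # assumed exponential receive correlation R[i, j] = r**|i - j|
    Rr = r ** np.abs(np.subtract.outer(np.arange(nr), np.arange(nr)))
    Rr_sqrt = np.linalg.cholesky(Rr + 1e-12 * np.eye(nr))
    total = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = Rr_sqrt @ Hw                                   # correlated Rayleigh channel
        M = np.eye(nr) + (rho / nt) * H @ H.conj().T
        total += np.log2(np.linalg.det(M).real)
    return total / trials

print(ergodic_capacity(r=0.0), ergodic_capacity(r=0.9))    # correlation lowers capacity
```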
Fuzzy-Spatial SQL Current Geographic Information Systems (GISs) are inadequate for performing spatial analysis, since they force users to formulate their often vague requests by means of crisp selection conditions on spatial data. In fact, SQL extended to support spatial analysis is becoming the de facto standard for GISs; however, it does not allow the formulation of flexible queries. Based on these considerations, we propose the extension of SQL/Spatial in order to make it flexible. Flexibility is obtained by allowing the expression of linguistic predicates defining soft spatial and non-spatial selection conditions admitting degrees of satisfaction. Specifically, this paper proposes an extension of the basic SQL SELECT operator; proposes the definition of some spatial functions to compute gradual topological, distance, and directional properties of spatial objects; introduces a new operator for defining linguistic predicates over spatial properties, and reports the related formal semantics.
Yield-Aware Cache Architectures One of the major issues faced by the semiconductor industry today is that of reducing chip yields. As the process technologies have scaled to smaller feature sizes, chip yields have dropped to around 50% or less. This figure is expected to decrease even further in future technologies. To attack this growing problem, we develop four yield-aware micro architecture schemes for data caches. The first one is called yield-aware power-down (YAPD). YAPD turns off cache ways that cause delay violation and/or have excessive leakage. We also modify this approach to achieve better yields. This new method is called horizontal YAPD (H-YAPD), which turns off horizontal regions of the cache instead of ways. A third approach targets delay violation in data caches. Particularly, we develop a variable-latency cache architecture (VACA). VACA allows different load accesses to be completed with varying latencies. This is enabled by augmenting the functional units with special buffers that allow the dependants of a load operation to stall for a cycle if the load operation is delayed. As a result, if some accesses take longer than the predefined number of cycles, the execution can still be performed correctly, albeit with some performance degradation. A fourth scheme we devise is called the hybrid mechanism, which combines the YAPD and the VACA. As a result of these schemes, chips that may be tossed away due to parametric yield loss can be saved. Experimental results demonstrate that the yield losses can be reduced by 68.1% and 72.4% with YAPD and H-YAPD schemes and by 33.3% and 81.1% with VACA and Hybrid mechanisms, respectively, improving the overall yield to as much as 97.0%
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods-known in the numerical PDEs context as spectral methods-to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
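A minimal sketch of the interpolatory pseudospectral idea for a parameterized linear system follows, assuming a made-up 2x2 system A(s)x(s) = b on s in [-1, 1]: solve exactly at Gauss-Legendre nodes, fit a Legendre surrogate per solution component, and compare against a direct solve.

```python
# Interpolatory pseudospectral surrogate for A(s) x(s) = b (toy example).
import numpy as np
from numpy.polynomial import legendre as L

A = lambda s: np.array([[2.0 + s, 0.5], [0.5, 3.0 - s]])
b = np.array([1.0, 1.0])

deg = 8
nodes, _ = L.leggauss(deg + 1)
X = np.array([np.linalg.solve(A(s), b) for s in nodes])       # exact solves at the nodes
coeffs = [L.legfit(nodes, X[:, i], deg) for i in range(len(b))]

s_test = 0.37
approx = np.array([L.legval(s_test, c) for c in coeffs])
exact = np.linalg.solve(A(s_test), b)
print(np.max(np.abs(approx - exact)))                          # small interpolation error
```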
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is applied to a real-life industrial problem of mix-product selection. This problem arises in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions are to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify solutions with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy mix-product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
1.113679
0.088889
0.037893
0.004656
0.000583
0.000021
0.000009
0.000003
0
0
0
0
0
0
A new method for multiple attribute group decision-making with intuitionistic trapezoid fuzzy linguistic information. With respect to multi-attribute group decision-making (MAGDM) problems in which the attribute values take the form of intuitionistic trapezoid fuzzy linguistic numbers, several new aggregation operators are first proposed: the intuitionistic trapezoid fuzzy linguistic weighted geometric operator, the intuitionistic trapezoid fuzzy linguistic ordered weighted geometric operator, the intuitionistic trapezoid fuzzy linguistic hybrid weighted geometric operator, the intuitionistic trapezoid fuzzy linguistic generalized weighted averaging operator, the intuitionistic trapezoid fuzzy linguistic generalized ordered weighted averaging operator and the intuitionistic trapezoid fuzzy linguistic generalized hybrid weighted averaging operator. Then, some desirable properties of these proposed operators are discussed, including monotonicity, idempotency, commutativity and boundedness. Furthermore, based on the proposed operators, some novel methods are developed to solve MAGDM problems with intuitionistic trapezoid fuzzy linguistic information under different cases. Finally, an illustrative example of emergency response capability evaluation is provided to illustrate the applicability and effectiveness of the proposed methods.
Subjective and objective information in linguistic multi-criteria group decision making. • Linguistic aggregation operators that integrate probabilities and weighted averages. • Linguistic probabilistic weighted aggregation operators with moving averages. • Aggregation operators with quasi-arithmetic means and linguistic information. • A new approach for linguistic multi-criteria group decision making in the EU law.
Hesitant Fuzzy Power Bonferroni Means and Their Application to Multiple Attribute Decision Making As a useful generalization of fuzzy sets, the hesitant fuzzy set is designed for situations in which it is difficult to determine the membership of an element to a set because of ambiguity between a few different values. In this paper, we define the ith-order polymerization degree function and propose a new ranking method to further compare different hesitant fuzzy sets. In order to obtain much mo...
On Improving the Additive Consistency of the Fuzzy Preference Relations Based on Comparative Linguistic Expressions AbstractLinguistic fuzzy preference relations are commonly used in decision-making problems. The recent proposal of comparative linguistic expressions has been introduced to deal with the situation that decision makers hesitate among several linguistic terms to express their assessments. To apply this idea to linguistic fuzzy preference relations, we propose the linguistic fuzzy preference relations based on the comparative linguistic expressions. We then transform the linguistic fuzzy preference relations into linguistic 2-tuple fuzzy preference relations, and introduce an iterative method to measure and improve the additive consistency of the linguistic 2-tuple fuzzy preference relations. An illustrative example is finally presented to clarify the computational processes.
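The additive-consistency notion that drives the iterative method above can be checked with a few lines. The reciprocal preference matrix and the simple mean-deviation measure below are illustrative assumptions; the standard condition p_ij + p_jk + p_ki = 1.5 is used.

```python
# Additive-consistency check for a reciprocal fuzzy preference relation (sketch).
import numpy as np
from itertools import permutations

P = np.array([[0.5, 0.7, 0.9],
              [0.3, 0.5, 0.6],
              [0.1, 0.4, 0.5]])           # p_ij + p_ji = 1 (reciprocal)

deviation = np.mean([abs(P[i, j] + P[j, k] + P[k, i] - 1.5)
                     for i, j, k in permutations(range(len(P)), 3)])
print(f"mean additive-consistency deviation: {deviation:.3f}")   # 0 means fully consistent
```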
Topsis For Hesitant Fuzzy Linguistic Term Sets We propose a new method to aggregate the opinion of experts or decision makers on different criteria, regarding a set of alternatives, where the opinion of the experts is represented by hesitant fuzzy linguistic term sets. An illustrative example is provided to elaborate the proposed method for selection of the best alternative.
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical static timing analysis (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity based operating condition as a supporting construct for variation-aware STA flows.
Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 < α ≤ 2) distances using a small (memory) space, in one pass of the data. We propose algorithms based on (1) the geometric mean estimator, for all 0 < α ≤ 2, and (2) the harmonic mean estimator, only for small α (e.g., α < 0.344). Compared with the previous classical work [27], our main contributions include: • The general sample complexity bound for α ≠ 1, 2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted ε to be "small enough." For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the "conceptual promise" that a sample complexity bound similar to that for α = 1 should exist for general α, if a "non-uniform algorithm based on t-quantile" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 < α ≤ 2. • The practical and optimal algorithm for α = 0+. The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications. We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.
Scale-Space and Edge Detection Using Anisotropic Diffusion A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
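A compact Perona-Malik-style sketch of the scheme described above is given below: the conduction coefficient g decays with the local gradient magnitude, so smoothing acts inside regions and is inhibited across edges. The parameter values and the periodic border handling via np.roll are simplifications for brevity, not the paper's settings.

```python
# Anisotropic (edge-preserving) diffusion sketch on a synthetic checkerboard.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=20.0, lam=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping conduction function
    for _ in range(n_iter):
        dN = np.roll(u, 1, axis=0) - u             # differences toward 4 neighbours (periodic)
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

rng = np.random.default_rng(0)
img = np.kron(np.array([[0.0, 100.0], [100.0, 0.0]]), np.ones((32, 32)))
noisy = img + 10.0 * rng.standard_normal(img.shape)
den = anisotropic_diffusion(noisy)
print(np.abs(den - img).mean(), "<", np.abs(noisy - img).mean())   # noise reduced, edges kept
```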
On the quasi-Monte Carlo method with Halton points for elliptic PDEs with log-normal diffusion. This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
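Only the quasi-Monte Carlo ingredient of the abstract above is easy to show in a few lines. The sketch below uses scipy's Halton generator with an inverse-CDF map to Gaussian inputs (which then feed a log-normal-type integrand) and a toy functional standing in for the PDE solution; dimensions and sample size are illustrative.

```python
# Halton-point QMC estimate of a mean versus plain Monte Carlo (toy integrand).
import numpy as np
from scipy.stats import qmc, norm

d, n = 8, 4096
f = lambda y: 1.0 / (1.0 + np.exp(0.3 * y).sum(axis=1) / d)   # toy functional of log-normal factors

u = qmc.Halton(d=d, scramble=False).random(n)
u = np.clip(u, 1e-12, 1.0 - 1e-12)                            # keep norm.ppf finite
qmc_estimate = f(norm.ppf(u)).mean()
mc_estimate = f(np.random.default_rng(0).standard_normal((n, d))).mean()
print(qmc_estimate, mc_estimate)
```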
Recognition of shapes by attributed skeletal graphs In this paper, we propose a framework to address the problem of generic 2-D shape recognition. The aim is mainly on using the potential strength of skeleton of discrete objects in computer vision and pattern recognition where features of objects are needed for classification. We propose to represent the medial axis characteristic points as an attributed skeletal graph to model the shape. The information about the object shape and its topology is totally embedded in them and this allows the comparison of different objects by graph matching algorithms. The experimental results demonstrate the correctness in detecting its characteristic points and in computing a more regular and effective representation for a perceptual indexing. The matching process, based on a revised graduated assignment algorithm, has produced encouraging results, showing the potential of the developed method in a variety of computer vision and pattern recognition domains. The results demonstrate its robustness in the presence of scale, reflection and rotation transformations and prove the ability to handle noise and occlusions.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
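Below is a stripped-down instance of the splitting-plus-augmented-Lagrangian family the abstract discusses: generic ADMM for an l2 data term with an l1 regularizer, not the authors' wavelet or total-variation formulation. The synthetic data and parameter choices are assumptions made for the demo.

```python
# Generic ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the splitting x = v.
import numpy as np

def admm_l1(A, b, lam, rho, n_iter=300):
    n = A.shape[1]
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))      # factor once, reuse every iteration
    x = np.zeros(n); v = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (v - u)))
        v = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft threshold
        u = u + x - v
    return v

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)
lam = 0.1 * np.max(np.abs(A.T @ b))
print(np.flatnonzero(np.abs(admm_l1(A, b, lam=lam, rho=lam)) > 0.1))   # indices of recovered nonzeros
```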
User impatience and network performance In this work, we analyze from passive measurements the correlations between the user-induced interruptions of TCP connections and different end-to-end performance metrics. The aim of this study is to assess the possibility for a network operator to take into account the customers' experience for network monitoring. We first observe that the usual connection-level performance metrics of the interrupted connections are not very different, and sometimes better than those of normal connections. However, the request-level performance metrics show stronger correlations between the interruption rates and the network quality-of-service. Furthermore, we show that the user impatience could also be used to characterize the relative sensitivity of data applications to various network performance metrics.
Fuzzy Power Command Enhancement in Mobile Communications Systems
On Fuzziness, Its Homeland and Its Neighbour
1.2
0.2
0.1
0.025
0.01
0
0
0
0
0
0
0
0
0
Mixed Multiscale Finite Element Methods for Stochastic Porous Media Flows In this paper, we propose a stochastic mixed multiscale finite element method. The proposed method solves the stochastic porous media flow equation on the coarse grid using a set of precomputed basis functions. The precomputed basis functions are constructed based on selected realizations of the stochastic permeability field, and furthermore the solution is projected onto the finite-dimensional space spanned by these basis functions. We employ multiscale methods using limited global information since the permeability fields do not have apparent scale separation. The proposed approach does not require any interpolation in stochastic space and can easily be coupled with interpolation-based approaches to predict the solution on the coarse grid. Numerical results are presented for permeability fields with normal and exponential variograms.
A Stochastic Mortar Mixed Finite Element Method for Flow in Porous Media with Multiple Rock Types This paper presents an efficient multiscale stochastic framework for uncertainty quantification in modeling of flow through porous media with multiple rock types. The governing equations are based on Darcy's law with nonstationary stochastic permeability represented as a sum of local Karhunen-Loève expansions. The approximation uses stochastic collocation on either a tensor product or a sparse grid, coupled with a domain decomposition algorithm known as the multiscale mortar mixed finite element method. The latter method requires solving a coarse scale mortar interface problem via an iterative procedure. The traditional implementation requires the solution of local fine scale linear systems on each iteration. We employ a recently developed modification of this method that precomputes a multiscale flux basis to avoid the need for subdomain solves on each iteration. In the stochastic setting, the basis is further reused over multiple realizations, leading to collocation algorithms that are more efficient than the traditional implementation by orders of magnitude. Error analysis and numerical experiments are presented.
Numerical Studies of Three-dimensional Stochastic Darcy's Equation and Stochastic Advection-Diffusion-Dispersion Equation Solute transport in randomly heterogeneous porous media is commonly described by stochastic flow and advection-dispersion equations with a random hydraulic conductivity field. The statistical distribution of conductivity of engineered and naturally occurring porous material can vary, depending on its origin. We describe solutions of a three-dimensional stochastic advection-dispersion equation using a probabilistic collocation method (PCM) on sparse grids for several distributions of hydraulic conductivity. Three random distributions of log hydraulic conductivity are considered: uniform, Gaussian, and truncated Gaussian (beta). Log hydraulic conductivity is represented by a Karhunen-Loève (K-L) decomposition as a second-order random process with an exponential covariance function. The convergence of PCM has been demonstrated. It appears that the accuracy in both the mean and the standard deviation of PCM solutions can be improved by using the Jacobi-chaos representing the truncated Gaussian distribution rather than the Hermite-chaos for the Gaussian distribution. The effect of type of distribution and parameters such as the variance and correlation length of log hydraulic conductivity and dispersion coefficient on leading moments of the advection velocity and solute concentration was investigated.
Parallel Domain Decomposition Methods for Stochastic Elliptic Equations We present parallel Schwarz-type domain decomposition preconditioned recycling Krylov subspace methods for the numerical solution of stochastic elliptic problems, whose coefficients are assumed to be a random field with finite variance. Karhunen-Loève (KL) expansion and double orthogonal polynomials are used to reformulate the stochastic elliptic problem into a large number of related but uncoupled deterministic equations. The key to an efficient algorithm lies in “recycling computed subspaces.” Based on a careful analysis of the KL expansion, we propose and test a grouping algorithm that tells us when to recycle and when to recompute some components of the expensive computation. We show theoretically and experimentally that the Schwarz preconditioned recycling GMRES method is optimal for the entire family of linear systems. A fully parallel implementation is provided, and scalability results are reported in the paper.
A stochastic mixed finite element heterogeneous multiscale method for flow in porous media A computational methodology is developed to efficiently perform uncertainty quantification for fluid transport in porous media in the presence of both stochastic permeability and multiple scales. In order to capture the small scale heterogeneity, a new mixed multiscale finite element method is developed within the framework of the heterogeneous multiscale method (HMM) in the spatial domain. This new method ensures both local and global mass conservation. Starting from a specified covariance function, the stochastic log-permeability is discretized in the stochastic space using a truncated Karhunen-Loeve expansion with several random variables. Due to the small correlation length of the covariance function, this often results in a high stochastic dimensionality. Therefore, a newly developed adaptive high dimensional stochastic model representation technique (HDMR) is used in the stochastic space. This results in a set of low stochastic dimensional subproblems which are efficiently solved using the adaptive sparse grid collocation method (ASGC). Numerical examples are presented for both deterministic and stochastic permeability to show the accuracy and efficiency of the developed stochastic multiscale method.
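Several of the abstracts above rely on a truncated Karhunen-Loeve expansion of a log-permeability field; a minimal 1-D sketch, assuming an exponential covariance and illustrative values of the variance, correlation length, and truncation level, is:

```python
# Truncated Karhunen-Loeve expansion of a 1-D log-permeability field (sketch).
import numpy as np

n, sigma, ell, n_terms = 200, 1.0, 0.2, 15
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(np.subtract.outer(x, x)) / ell)   # exponential covariance
eigval, eigvec = np.linalg.eigh(C / n)                    # discretized covariance operator
order = np.argsort(eigval)[::-1][:n_terms]
lam, phi = eigval[order], eigvec[:, order] * np.sqrt(n)   # modes normalized in L2([0,1])

rng = np.random.default_rng(0)
xi = rng.standard_normal(n_terms)                         # independent standard normals
log_k = phi @ (np.sqrt(lam) * xi)                         # one realization of log-permeability
permeability = np.exp(log_k)
print("retained modes capture", round(float(lam.sum()) / sigma**2, 3), "of the variance")
```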
On ANOVA expansions and strategies for choosing the anchor point The classic Lebesgue ANOVA expansion offers an elegant way to represent functions that depend on a high-dimensional set of parameters and it often enables a substantial reduction in the evaluation cost of such functions once the ANOVA representation is constructed. Unfortunately, the construction of the expansion itself is expensive due to the need to evaluate high-dimensional integrals. A way around this is to consider an alternative formulation, known as the anchored ANOVA expansion. This formulation requires no integrals but has an accuracy that depends sensitively on the choice of a special parameter, known as the anchor point.
High dimensional polynomial interpolation on sparse grids We study polynomial interpolation on a d-dimensional cube, where d is large. We suggest to use the least solution at sparse grids with the extrema of the Chebyshev polynomials. The polynomial exactness of this method is almost optimal. Our error bounds show that the method is universal, i.e., almost optimal for many different function spaces. We report on numerical experiments for d = 10 using up to 652 065 interpolation points.
Multi-element probabilistic collocation method in high dimensions We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM [1] and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order μ and the effective dimension ν, with ν ≤ N, and N the nominal dimension. Numerical tests for multi-dimensional integration and for stochastic elliptic problems suggest that ν = μ for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images A full-rank matrix ${\bf A}\in \mathbb{R}^{n\times m}$ with $n<m$ ... Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable, but there is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems have energized research on such signal and image processing problems—to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical results on sparse modeling of signals and images, and recent applications in inverse problems and compression in image processing. This work lies at the intersection of signal processing and applied mathematics, and arose initially from the wavelets and harmonic analysis research communities. The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or its applications.
Random Projections of Smooth Manifolds We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about a manifold-modeled signal. We center our analysis on the effect of a random linear projection operator Φ: ℝ^N → ℝ^M, M < N, on a smooth well-conditioned K-dimensional submanifold ℳ ⊂ ℝ^N. As our main theoretical contribution, we establish a sufficient number M of random projections to guarantee that, with high probability, all pairwise Euclidean and geodesic distances between points on ℳ are well preserved under the mapping Φ. Our results bear strong resemblance to the emerging theory of Compressed Sensing (CS), in which sparse signals can be recovered from small numbers of random linear measurements. As in CS, the random measurements we propose can be used to recover the original data in ℝ^N. Moreover, like the fundamental bound in CS, our requisite M is linear in the “information level” K and logarithmic in the ambient dimension N; we also identify a logarithmic dependence on the volume and conditioning of the manifold. In addition to recovering faithful approximations to manifold-modeled signals, however, the random projections we propose can also be used to discern key properties about the manifold. We discuss connections and contrasts with existing techniques in manifold learning, a setting where dimensionality reducing mappings are typically nonlinear and constructed adaptively from a set of sampled training data.
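A quick numerical illustration of the distance-preservation idea above follows, using a generic Gaussian random projection applied to points on a smooth 1-D manifold (a circle) embedded in a high-dimensional space. It conveys the flavor of the result only and does not use the paper's bound or its constants.

```python
# Pairwise distance ratios under a Gaussian random projection of a manifold sample.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
N, M, n_pts = 1000, 200, 200

t = rng.uniform(0.0, 2.0 * np.pi, n_pts)
plane = np.linalg.qr(rng.standard_normal((N, 2)))[0]      # random 2-D plane in R^N
points = np.column_stack([np.cos(t), np.sin(t)]) @ plane.T

Phi = rng.standard_normal((M, N)) / np.sqrt(M)            # random linear projection to R^M
ratios = pdist(points @ Phi.T) / pdist(points)
print(ratios.min(), ratios.max())                         # ratios cluster near 1
```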
Proceedings of the 47th Design Automation Conference, DAC 2010, Anaheim, California, USA, July 13-18, 2010
Fuzzy relational algebra for possibility-distribution-fuzzy-relational model of fuzzy data In the real world, there exist a lot of fuzzy data which cannot or need not be precisely defined. We distinguish two types of fuzziness: one in an attribute value itself and the other in an association of them. For such fuzzy data, we propose a possibility-distribution-fuzzy-relational model, in which fuzzy data are represented by fuzzy relations whose grades of membership and attribute values are possibility distributions. In this model, the former fuzziness is represented by a possibility distribution and the latter by a grade of membership. Relational algebra for the ordinary relational database as defined by Codd includes the traditional set operations and the special relational operations. These operations are classified into the primitive operations, namely, union, difference, extended Cartesian product, selection and projection, and the additional operations, namely, intersection, join, and division. We define the relational algebra for the possibility-distribution-fuzzy-relational model of fuzzy databases.
On type-2 fuzzy sets and their t-norm operations In this paper, we discuss t-norm extension operations of general binary operation for fuzzy true values on a linearly ordered set, with a unit interval and a real number set as special cases. On the basis of it, t-norm operations of type-2 fuzzy sets and properties of type-2 fuzzy numbers are discussed.
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
1.054278
0.025
0.017371
0.01375
0.008606
0.001
0.000185
0.000027
0
0
0
0
0
0
Adaptive sparse polynomial chaos expansion based on least angle regression Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive, i.e. of Galerkin type, or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms are eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical "full" PC approximation. The convergence of the algorithm is shown on an analytical function. Then the method is illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30-500 random variables, respectively.
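A scikit-learn flavored sketch of the idea above (not the authors' algorithm) is given below: regress model evaluations on a total-degree Hermite chaos basis and let least angle regression select a sparse subset of terms. Cross-validated LARS-lasso is used here in place of the paper's corrected leave-one-out criterion, and the test function, degree, and sample size are made up for illustration.

```python
# Sparse polynomial-chaos regression via LARS-type selection (hedged sketch).
import numpy as np
from itertools import product
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import LassoLarsCV

def hermite_design(X, degree):
    """Orthonormal multivariate Hermite basis of total degree <= degree."""
    d = X.shape[1]
    multis = [m for m in product(range(degree + 1), repeat=d) if sum(m) <= degree]
    cols = []
    for m in multis:
        col = np.ones(len(X))
        for j, k in enumerate(m):
            col *= hermeval(X[:, j], [0] * k + [1]) / np.sqrt(factorial(k))
        cols.append(col)
    return np.column_stack(cols), multis

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))                         # 4 standard-normal inputs
y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(300)

Phi, multis = hermite_design(X, degree=4)
fit = LassoLarsCV(cv=5).fit(Phi[:, 1:], y)                # drop the constant column
active = [multis[i + 1] for i in np.flatnonzero(np.abs(fit.coef_) > 1e-3)]
print(active)                                             # sparse set of retained multi-indices
```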
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods - such as first order perturbation theory or Monte Carlo sampling - Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15-20), alleviating a well-known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
Sequential experimental design based generalised ANOVA. Over the last decade, the surrogate modelling technique has gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.
Metamodelling with independent and dependent inputs. In the case of computationally expensive models, the metamodelling technique, which maps inputs to outputs, is a very useful and practical way of making computations tractable. A number of new techniques which improve the efficiency of the Random Sampling-High Dimensional Model Representation (RS-HDMR) for models with independent and dependent input variables are presented. Two different metamodelling methods for models with dependent input variables are compared. Both techniques are based on a Quasi Monte Carlo variant of RS-HDMR. The first technique makes use of a transformation of the dependent input vector into a Gaussian independent random vector and then applies the decomposition of the model using a tensored Hermite polynomial basis. The second approach uses a direct decomposition of the model function into a basis which consists of the marginal distributions of the input components and their joint distribution. For both methods the copula formalism is used. Numerical tests show that the developed methods are robust and efficient.
Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters. The control of the error along different subsets of parameters may be needed, for instance, in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid PSP is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of a PSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve a similar projection error. In addition, the global approach is better suited for generalization to more than two subsets of directions.
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA. This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier–Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic ‘Navier–Stokes alike’ equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams–Bashforth scheme for the convective term and the Crank–Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach with a change in the nature of the random variable. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin–Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
Error Analysis of the Dynamically Orthogonal Approximation of Time Dependent Random PDEs In this work we discuss the dynamically orthogonal (DO) approximation of time dependent partial differential equations with random data. The approximate solution is expanded at each time instant on a time dependent orthonormal basis in the physical domain with a fixed and small number of terms. Dynamic equations are written for the evolution of the basis as well as the evolution of the stochastic coefficients of the expansion. We analyze the case of a linear parabolic equation with random data and derive a theoretical bound for the approximation error of the S-terms DO solution by the corresponding S-terms best approximation, i.e., the truncated S-terms Karhunen-Loeve expansion at each time instant. The bound is applicable on the largest time interval in which the best S-terms approximation is continuously time differentiable. Properties of the DO approximations are analyzed on simple cases of deterministic equations with random initial data. Numerical tests are presented that confirm the theoretical bound and show potentials and limitations of the proposed approach.
Compressed sensing with cross validation Compressed sensing (CS) decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(k log(N/k)) measurements y = Φx. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a CS estimate x̂ using m measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error ∥x − x̂∥_{ℓ2^N} can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements m is preimposed. One can reserve 10 log p of these m measurements and compute a sequence of possible estimates (x̂_j)_{j=1}^p to x from the m − 10 log p remaining measurements; the errors ∥x − x̂_j∥_{ℓ2^N} for j = 1,...,p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. This observation has applications outside CS as well.
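The cross-validation idea above translates into a few lines of code. The sketch below (a hypothetical setup, not the paper's experiments) reserves a handful of measurements, decodes from the rest for several candidate sparsity levels using iterative hard thresholding as a stand-in decoder, and uses the held-out residual as a proxy for the unknown error ∥x − x̂_j∥.

```python
# Sketch of compressed sensing with cross validation: hold out a few measurements,
# decode from the remaining ones for several sparsity levels k, and use the held-out
# residual as an error proxy.  Iterative hard thresholding (IHT) is only a convenient
# stand-in decoder here; the cross-validation idea is decoder-agnostic.
import numpy as np

rng = np.random.default_rng(0)
N, m, k_true = 256, 80, 5
x = np.zeros(N)
x[rng.choice(N, k_true, replace=False)] = rng.standard_normal(k_true)
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
y = Phi @ x

m_cv = 15                                      # reserved cross-validation measurements
Phi_fit, y_fit = Phi[:-m_cv], y[:-m_cv]
Phi_cv, y_cv = Phi[-m_cv:], y[-m_cv:]

def iht(A, b, k, iters=300):
    """Iterative hard thresholding: gradient step, then keep the k largest entries."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    xk = np.zeros(A.shape[1])
    for _ in range(iters):
        xk = xk + step * A.T @ (b - A @ xk)
        small = np.argsort(np.abs(xk))[:-k]
        xk[small] = 0.0
    return xk

for k in (2, 5, 10, 20):
    x_hat = iht(Phi_fit, y_fit, k)
    cv_err = np.linalg.norm(y_cv - Phi_cv @ x_hat)      # observable proxy
    true_err = np.linalg.norm(x - x_hat)                # unknown in practice
    print(f"k={k:2d}  held-out residual={cv_err:.3f}  true error={true_err:.3f}")
```

The held-out residual tracks the true error closely enough to pick a good sparsity level, which is essentially the guarantee the abstract describes (with the reserved budget there being on the order of 10 log p for p candidate estimates).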
On the construction and analysis of stochastic models: characterization and propagation of the errors associated with limited data This paper investigates the predictive accuracy of stochastic models. In particular, a formulation is presented for the impact of data limitations associated with the calibration of parameters for these models, on their overall predictive accuracy. In the course of this development, a new method for the characterization of stochastic processes from corresponding experimental observations is obtained. Specifically, polynomial chaos representations of these processes are estimated that are consistent, in some useful sense, with the data. The estimated polynomial chaos coefficients are themselves characterized as random variables with known probability density function, thus permitting the analysis of the dependence of their values on further experimental evidence. Moreover, the error in these coefficients, associated with limited data, is propagated through a physical system characterized by a stochastic partial differential equation (SPDE). This formalism permits the rational allocation of resources in view of studying the possibility of validating a particular predictive model. A Bayesian inference scheme is relied upon as the logic for parameter estimation, with its computational engine provided by a Metropolis-Hastings Markov chain Monte Carlo procedure.
Restricted Isometries for Partial Random Circulant Matrices In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the sth-order restricted isometry constant is small when the number m of samples satisfies m ≳ (s log n)^{3/2}, where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
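The measurement model analyzed above, convolution with a random pulse followed by deterministic subsampling, is easy to state in code. Below is a hedged sketch with illustrative sizes and names, using FFT-based circular convolution.

```python
# Sketch of acquisition by a partial random circulant matrix: circular convolution
# of the signal with a random +/-1 pulse (via FFT) followed by fixed subsampling.
import numpy as np

rng = np.random.default_rng(1)
n, m = 512, 128
pulse = rng.choice([-1.0, 1.0], size=n)           # random pulse (generator of the circulant)
keep = np.sort(rng.choice(n, m, replace=False))   # nonrandom subsampling pattern, fixed up front

def partial_circulant_measure(x):
    """y = subsample(pulse (*) x): circular convolution, then keep m of the n outputs."""
    full = np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x)).real
    return full[keep]

x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = partial_circulant_measure(x)
print(y.shape)   # (128,) measurements of an 8-sparse length-512 signal
```

Recovery from y would then proceed with any standard sparse solver; the abstract's contribution is the bound m ≳ (s log n)^{3/2} on how many of the n convolution outputs must be kept.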
Statistical Analysis of On-Chip Power Delivery Networks Considering Lognormal Leakage Current Variations With Spatial Correlation As the technology scales into 90 nm and below, process-induced variations become more pronounced. In this paper, we propose an efficient stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering log-normal leakage current variations with spatial correlation. The new analysis is based on the Hermite polynomial chaos (PC) representation of random processes. Different from the existing Hermite PC based method for power grid analysis (Ghanta et al., 2005), which models all the random variations as Gaussian processes without considering spatial correlation, the new method considers the effects of both wire variations and subthreshold leakage current variations, which are modeled as log-normally distributed random variables, on the power grid voltage variations. To consider the spatial correlation, we apply orthogonal decomposition to map the correlated random variables into independent variables. Our experimental results show that the new method is more accurate than the Gaussian-only Hermite PC method using the Taylor expansion method for analyzing leakage current variations. It is two orders of magnitude faster than the Monte Carlo method with small variance errors. We also show that the spatial correlation may lead to large errors if not considered in the statistical analysis.
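A key ingredient of such lognormal PC analyses is the classical Hermite chaos representation of a lognormal random variable: if L = exp(μ + σξ) with ξ ~ N(0,1), then L = exp(μ + σ²/2) Σ_k (σ^k/k!) He_k(ξ), with He_k the probabilists' Hermite polynomials. The sketch below checks this expansion numerically; it is a textbook identity, not the paper's full power-grid method.

```python
# Numerical check of the Hermite polynomial-chaos expansion of a lognormal variable,
# the building block of lognormal leakage models:
#   L = exp(mu + sigma*xi), xi ~ N(0,1)  ==>  L = exp(mu + sigma**2/2) * sum_k (sigma**k/k!) He_k(xi)
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval   # evaluates probabilists' Hermite series

mu, sigma, order = -1.0, 0.4, 6
coeffs = [np.exp(mu + sigma**2 / 2) * sigma**k / factorial(k) for k in range(order + 1)]

rng = np.random.default_rng(2)
xi = rng.standard_normal(100_000)
exact = np.exp(mu + sigma * xi)
pc = hermeval(xi, coeffs)                          # truncated PC series evaluated at the samples
print("max relative truncation error:", np.max(np.abs(pc - exact) / exact))
```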
Hierarchical Classifiers for Complex Spatio-temporal Concepts The aim of the paper is to present rough set methods of constructing hierarchical classifiers for approximation of complex concepts. Classifiers are constructed on the basis of experimental data sets and domain knowledge that are mainly represented by concept ontology. Information systems, decision tables and decision rules are basic tools for modeling and constructing such classifiers. The general methodology presented here is applied to approximate spatial complex concepts and spatio-temporal complex concepts defined for (un)structured complex objects, to identify the behavioral patterns of complex objects, and to the automated behavior planning for such objects when the states of objects are represented by spatio-temporal concepts requiring approximation. We describe the results of computer experiments performed on real-life data sets from a vehicular traffic simulator and on medical data concerning the infant respiratory failure.
Aggregating constraint satisfaction degrees expressed by possibilistic truth values In information systems, one often has to deal with constraints in order to compel the semantics and integrity of the stored information or to express some querying criteria. Hereby, different constraints can be of different importance. A method to aggregate the information about the satisfaction of a finite number of constraints for a given data instance is presented. Central to the proposed method is the use of extended possibilistic truth values (to express the degree of satisfaction of a constraint) and the use of residual implicators and residual coimplicators (to model the impact and relevance of a constraint). The proposed method can be applied to any constraint-based system. A database application is discussed and illustrated.
Signal probability based statistical timing analysis VLSI timing analysis and power estimation target the same circuit switching activity. Power estimation techniques are categorized as (1) static, (2) statistical, and (3) simulation and testing based methods. Similarly, statistical timing analysis methods fall into three counterpart categories: (1) statistical static timing analysis, (2) probabilistic technique based statistical timing analysis, and (3) Monte Carlo (SPICE) simulation and testing. Leveraging existing power estimation techniques, I propose signal probability (i.e., the logic one occurrence probability on a net) based statistical timing analysis, for improved accuracy and reduced pessimism over the existing statistical static timing analysis methods, and improved efficiency over Monte Carlo (SPICE) simulation. Experimental results on ISCAS benchmark circuits show that SPSTA computes the means (standard deviations) of the maximum signal arrival times within 5.6% (7.7%), SSTA within 16.5% (46.9%), and STA within 83.0% (132.4%) on average of the Monte Carlo simulation results, respectively. More significant accuracy improvements are expected in the presence of increased process and environmental variations.
1.003971
0.004259
0.004259
0.00413
0.003704
0.003704
0.001852
0.000759
0.000206
0.000011
0
0
0
0
Lama: A Linguistic Aggregation Of Majority Additive Operator A problem that we had encountered in the aggregation process is how to aggregate the elements that have cardinality > 1. The purpose of this article is to present a new aggregation operator of linguistic labels that uses the cardinality of these elements, the linguistic aggregation of majority additive (LAMA) operator. We also present an extension of the LAMA operator under the two-tuple fuzzy linguistic representation model.
Constructing Linguistic Versions For The Multicriteria Decision Support Systems Preference Ranking Organization Method For Enrichment Evaluation I And Ii The environmental impact assessment (EIA) is a real problem of multicriteria decision making (MCDM) where information, as much quantitative as qualitative, coexists. The traditional MCDM methods developed for the EIA discriminate in favor of quantitative information at the expense of qualitative information, because we are unable to integrate the latter inside their procedure. In this study, we present two new multicriteria decision fuzzy methods, called fuzzy preference ranking organization method for enrichment evaluation (FPROMETHEE2T) I and II, which are able to integrate both quantitative and qualitative information inside their procedure. This has been performed by applying a new linguistic representation model based on two tuples. These methods, although they have been developed for EIA problems, can be applied to all sorts of decision-making problems, with information of any nature. Therefore, the application of this method to real problems will lead to better results in MCDM. The main interest of our investigation group currently is to develop a set of different multicriteria decision fuzzy methods to be integrated inside a software program that works as a multicriteria decision aid.
A computing with words based approach to multicriteria energy planning Exploitation of new and innovative energy alternatives is a key means towards a sustainable energy system. This paper proposes a linguistic energy planning model with computation solely on words as well as considering the policy-maker's preference information. To do so, a probabilistic approach is first proposed to derive the underlying semantic overlapping of linguistic labels from their associated fuzzy membership functions. Second, a satisfactory-oriented choice function is proposed to incorporate the policy-maker's preference information. Third, our model is extended to multicriteria case with linguistic importance weights. One example, borrowed from the literature, is used to show the effectiveness and advantages of our model.
Applying a direct multi-granularity linguistic and strategy-oriented aggregation approach on the assessment of supply performance Supply performance exhibits continuous behaviour that spans past, present and future time horizons. Each individual behaviour therefore carries distinct uncertainty, which is inadequately captured by purely quantitative assessment. This study uses linguistic variables instead of numerical variables to offset the inaccuracy of quantification, and employs a linguistic scale fitted to the characteristics of supply behaviour to enhance applicability. Furthermore, a uniformity transformation is introduced to bring linguistic information expressed on different scales onto a common one. Finally, the linguistic ordered weighted averaging operator with maximal entropy is applied directly to aggregate the combination of linguistic information and product strategy, so that the assessment results meet the enterprise requirements and emulate human mental decision making in a linguistic manner.
A satisfactory-oriented approach to multiexpert decision-making with linguistic assessments. This paper proposes a multiexpert decision-making (MEDM) method with linguistic assessments, making use of the notion of random preferences and a so-called satisfactory principle. It is well known that decision-making problems that manage preferences from different experts follow a common resolution scheme composed of two phases: an aggregation phase that combines the individual preferences to obtain a collective preference value for each alternative; and an exploitation phase that orders the collective preferences according to a given criterion, to select the best alternative/s. For our method, instead of using an aggregation operator to obtain a collective preference value, a random preference is defined for each alternative in the aggregation phase. Then, based on a satisfactory principle defined in this paper, which says that it is perfectly satisfactory to select an alternative as the best if its performance is at least as "good" as all the others under the same evaluation scheme, we propose a linguistic choice function to establish a rank ordering among the alternatives. Moreover, we also discuss how this linguistic decision rule can be applied to the MEDM problem in multigranular linguistic contexts. Two application examples taken from the literature are used to illuminate the proposed techniques.
A Linguistic Multigranular Sensory Evaluation Model For Olive Oil Evaluation is a process that analyzes elements in order to achieve different objectives such as quality inspection, marketing and other fields in industrial companies. This paper focuses on sensory evaluation, where the evaluated items are assessed by a panel of experts according to the knowledge acquired via human senses. In these evaluation processes the information provided by the experts implies uncertainty, vagueness and imprecision. The use of the Fuzzy Linguistic Approach (32) has provided successful results in modelling such a type of information. In sensory evaluation it may happen that the experts on the panel have different degrees of knowledge about the evaluated items or indicators. So, it seems suitable that each expert could express their preferences in different linguistic term sets based on their own knowledge. In this paper, we present a sensory evaluation model that manages a multigranular linguistic evaluation framework based on a decision analysis scheme. This model will be applied to the sensory evaluation process of olive oil.
Fuzzy Linguistic PERT A model for the Program Evaluation and Review Technique (PERT) under fuzzy linguistic contexts is introduced. In this fuzzy linguistic PERT network model, each activity duration is represented by a fuzzy linguistic description. Aggregation and comparison of the estimated linguistic expectations of activity durations are manipulated by the techniques of computing with words (CW). To provide suitable contexts for this purpose, we first introduce several variations of basic linguistic labels of a linguistic variable, such as weighted linguistic labels, generalized linguistic labels and weighted generalized linguistic labels, and then, based on the notion of the canonical characteristic value (CCV) function of a linguistic variable, we develop some related CW techniques for aggregation and comparison of these linguistic labels. Afterward, using a computing technique of linguistic probability introduced by Zadeh and based on the newly developed CW techniques for weighted generalized linguistic labels, we investigate the associated linguistic expectation PERT network of a fuzzy linguistic PERT network. Throughout the paper, several examples are used to illustrate the related notions and applications.
Fuzzy Grey Gm(1,1) Model Under Fuzzy System The grey GM(1, 1) forecasting model is a short-term forecasting method that has been successfully applied to management and engineering problems with as few as four data points. However, when a new system is constructed, the system is uncertain and variable, so the collected data are usually of fuzzy type and cannot be used directly in the grey GM(1, 1) forecasting model. To cope with this problem, the fuzzy system derived from the collected data is considered via a fuzzy grey controlled variable to derive a fuzzy grey GM(1, 1) model that forecasts extrapolative values under the fuzzy system. Finally, an example is described for illustration.
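For reference, the crisp GM(1,1) base model that the fuzzy variant extends can be fitted in a few lines: accumulate the series, estimate the development and control coefficients by least squares, then forecast and de-accumulate. The sketch below shows only this classical core under the usual formulation; the paper's fuzzy grey controlled variable is not reproduced.

```python
# Sketch of the classical (crisp) grey GM(1,1) model.
# Steps: accumulated generating operation (AGO), fit dx1/dt + a*x1 = b by least squares
# on the mean-generated sequence, forecast the accumulated series, then de-accumulate.
import numpy as np

def gm11_forecast(x0, steps=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # AGO
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean-generated sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey development / control coefficients
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response of the accumulated series
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse AGO
    return x0_hat[len(x0) - 1:]                          # forecasts beyond the observed sample

print(gm11_forecast([2.1, 2.4, 2.7, 3.1], steps=2))      # two-step-ahead forecast from four points
```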
Hesitant fuzzy entropy and cross-entropy and their use in multiattribute decision-making We introduce the concepts of entropy and cross-entropy for hesitant fuzzy information, and discuss their desirable properties. Several measure formulas are further developed, and the relationships among the proposed entropy, cross-entropy, and similarity measures are analyzed, from which we can find that the three measures are interchangeable under certain conditions. Then we develop two multiattribute decision-making methods in which the attribute values are given in the form of hesitant fuzzy sets, reflecting humans' hesitant thinking comprehensively. In one method, the weight vector is determined by the hesitant fuzzy entropy measure, and the optimal alternative is obtained by comparing the hesitant fuzzy cross-entropies between the alternatives and the ideal solutions; in the other method, the weight vector is derived from the maximizing deviation method and the optimal alternative is obtained by using the TOPSIS method. An actual example is provided to compare our methods with the existing ones.
A fuzzy approach to select the location of the distribution center The location selection of a distribution center (DC) is one of the most important decision issues for logistics managers. Owing to the vague concepts frequently represented in decision data, a new multiple criteria decision-making method is proposed to solve the distribution center location selection problem under a fuzzy environment. In the proposed method, the ratings of each alternative and the weight of each criterion are described by linguistic variables which can be expressed as triangular fuzzy numbers. The final evaluation value of each DC location is also expressed as a triangular fuzzy number. By calculating the difference of the final evaluation values between each pair of DC locations, a fuzzy preference relation matrix is constructed to represent the intensity of the preferences of one plant location over another. A stepwise ranking procedure is then proposed to determine the ranking order of all candidate locations. Finally, a numerical example is solved to illustrate the procedure of the proposed method.
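The linguistic-rating step of such a method boils down to triangular fuzzy number (TFN) arithmetic. The sketch below aggregates illustrative linguistic ratings and weights into a fuzzy evaluation value per candidate site using the usual approximate TFN operations; the paper's fuzzy preference relation matrix and stepwise ranking procedure are not reproduced, and all labels and numbers are made up for illustration.

```python
# Sketch of the aggregation step for fuzzy DC-location selection: linguistic ratings and
# weights are triangular fuzzy numbers (l, m, u); the fuzzy evaluation of an alternative
# is the average of weight*rating over criteria, using the usual approximate TFN arithmetic.
def tfn_mul(a, b):                       # approximate product of two non-negative TFNs
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def fuzzy_score(ratings, weights):
    total = (0.0, 0.0, 0.0)
    for r, w in zip(ratings, weights):
        total = tfn_add(total, tfn_mul(w, r))
    n = len(ratings)
    return tuple(v / n for v in total)

# Illustrative linguistic scale on [0, 10]: Poor, Fair, Good, Very Good
P, F, G, VG = (0, 1, 3), (2, 4, 6), (5, 7, 9), (8, 9, 10)
LOW, MED, HIGH = (0.0, 0.2, 0.4), (0.3, 0.5, 0.7), (0.7, 0.9, 1.0)   # criterion weights

weights = [HIGH, MED, HIGH]                         # e.g. cost, expansion room, accessibility
print("site A:", fuzzy_score([G, VG, F], weights))  # each result is itself a TFN
print("site B:", fuzzy_score([VG, F, G], weights))
```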
Fuzzy Reasoning Based On The Extension Principle According to the operation of decomposition (also known as the representation theorem) (Negoita CV, Ralescu DA. Kybernetes 1975;4:169-174) in fuzzy set theory, the whole fuzziness of an object can be characterized by a sequence of local crisp properties of that object. Hence, any fuzzy reasoning could also be implemented by using a similar idea, i.e., a sequence of precise reasoning. More precisely, we could translate a fuzzy relation "If A then B" of the Generalized Modus Ponens Rule (the most common and widely used interpretation of a fuzzy rule, where A and B are fuzzy sets in a universe of discourse X and a universe of discourse Y, respectively) into a corresponding precise relation between a subset of P(X) and a subset of P(Y), and then extend this corresponding precise relation to two kinds of transformations between all L-type fuzzy subsets of X and those of Y by using Zadeh's extension principle, where L denotes a complete lattice. In this way, we provide an alternative approach to the existing compositional rule of inference, which performs fuzzy reasoning based on the extension principle. The approach does not depend on the choice of fuzzy implication operator nor on the choice of a t-norm. The detailed reasoning methods, applied in particular to the Generalized Modus Ponens and the Generalized Modus Tollens, are established and their properties are further investigated in this paper.
System-supported, individualized customer communication in the multi-channel world of the financial services industry - representing customer attitudes in a customer model (original German title: Systemunterstützt individualisierte Kundenansprache in der Mehrkanalwelt der Finanzdienstleistungsbranche - Repräsentation der Einstellungen von Kunden in einem Kundenmodell)
Conformal Maps to Multiply Slit Domains and Applications By exploiting conformal maps to vertically slit regions in the complex plane, a recently developed rational spectral method [T. W. Tee and L. N. Trefethen, SIAM J. Sci. Comput., 28 (2006), pp. 1798-1811] is able to solve PDEs with interior layer-like behavior using significantly fewer collocation points than traditional spectral methods. The conformal maps are chosen to “enlarge the region of analyticity” in the solution: an idea which can be extended to other numerical methods based upon global polynomial interpolation. Here we show how such maps can be rapidly computed in both periodic and nonperiodic geometries and apply them to some challenging differential equations.
Thermal switching error versus delay tradeoffs in clocked QCA circuits The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the more the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N-lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N-lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones. We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
1.029551
0.031276
0.031276
0.01498
0.012727
0.010026
0.005227
0.00163
0.000171
0.000041
0.000001
0
0
0
Priority-based Media Delivery using SVC with RTP and HTTP streaming Media delivery, especially video delivery over mobile channels may be affected by transmission bitrate variations or temporary link interruptions caused by changes in the channel conditions or the wireless interface. In this paper, we present the use of Priority-based Media Delivery (PMD) for Scalable Video Coding (SVC) to overcome link interruptions and channel bitrate reductions in mobile networks by performing a transmission scheduling algorithm that prioritizes media data according to its importance. The proposed approach comprises a priority-based media pre-buffer to overcome periods under reduced connectivity. The PMD algorithm aims to use the same transmission bitrate and overall buffer size as the traditional streaming approach, yet is more likely to overcome interruptions and reduced bitrate periods. PMD achieves longer continuous playback than the traditional approach, avoiding disruptions in the video playout and therefore improving the video playback quality. We analyze the use of SVC with PMD in the traditional RTP streaming and in the adaptive HTTP streaming context. We show benefits of using SVC in terms of received quality during interruption and re-buffering time, i.e. the time required to fill a desired pre-buffer at the receiver. We present a quality optimization approach for PMD and show results for different interruption/bitrate-reduction scenarios.
A reliable decentralized Peer-to-Peer Video-on-Demand system using helpers. We propose a decentralized Peer-to-Peer (P2P) Video-on-Demand (VoD) system. The traditional data center architecture is eliminated and is replaced by a large set of distributed, dynamic and individually unreliable helpers. The system leverages the strength of numbers to effect reliable cooperative content distribution, removing the drawbacks of conventional data center architectures including complexity of maintenance, high power consumption and lack of scalability. In the proposed VoD system, users and helper "servelets" cooperate in a P2P manner to deliver the video stream. Helpers are preloaded with only a small fraction of parity coded video data packets, and form into swarms each serving partial video content. The total number of helpers is optimized to guarantee high quality of service. In cases of helper churn, the helper network is also able to regenerate itself by users and helpers working cooperatively to repair the lost data, which yields a highly reliable system. Analysis and simulation results corroborate the feasibility and effectiveness of the proposed architecture.
MulTFRC: providing weighted fairness for multimedia applications (and others too!) When data transfers to or from a host happen in parallel, users do not always consider them to have the same importance. Ideally, a transport protocol should therefore allow its users to manipulate the fairness among flows in an almost arbitrary fashion. Since data transfers can also include real-time media streams which need to keep delay, and hence buffers, small, the protocol should also have a smooth sending rate. In an effort to satisfy the above requirements, we present MulTFRC, a congestion control mechanism which is based on the TCP-friendly Rate Control (TFRC) protocol. It emulates the behavior of a number of TFRC flows while maintaining a smooth sending rate. Our simulations and a real-life test demonstrate that MulTFRC performs significantly better than its competitors, potentially making it applicable in a broader range of settings than what TFRC is normally associated with.
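For context, TFRC-based schemes steer their sending rate with the TCP-friendly rate equation (the Padhye et al. form used in RFC 5348); MulTFRC emulates the aggregate behaviour of several such flows. The sketch below evaluates only the standard single-flow equation; MulTFRC's own n-flow response function is not reproduced here.

```python
# Reference sketch: the single-flow TCP-friendly rate equation that TFRC-based schemes
# build on (Padhye et al. / RFC 5348 form).  MulTFRC emulates n such flows; its exact
# n-flow response function is not shown.
from math import sqrt

def tfrc_rate(s, R, p, b=1, t_rto=None):
    """Sending rate in bytes/s for packet size s (bytes), round-trip time R (s), loss event rate p."""
    t_rto = 4 * R if t_rto is None else t_rto
    denom = R * sqrt(2 * b * p / 3) + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p**2)
    return s / denom

# e.g. 1460-byte packets, 100 ms RTT, 1% loss event rate: one TFRC flow's fair rate
one_flow = tfrc_rate(1460, 0.1, 0.01)
print(f"single-flow rate ~ {one_flow / 1e3:.1f} kB/s")
```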
Effects Of Mgs Fragmentation, Slice Mode And Extraction Strategies On The Performance Of Svc With Medium-Grained Scalability This paper presents a comparison of a wide set of MGS fragmentation configurations of SVC in terms of their PSNR performance, with the slice mode on or off, using multiple extraction methods. We also propose a priority-based hierarchical extraction method which outperforms other extraction schemes for most MGS configurations. Experimental results show that splitting the MGS layer into more than five fragments, when the slice mode is on, may result in noticeable decrease in the average PSNR. It is also observed that for videos with large key frame enhancement NAL units, MGS fragmentation and/or slice mode have positive impact on the PSNR of the extracted video at low bitrates. While using slice mode without MGS fragmentation may improve the PSNR performance at low rates, it may result in uneven video quality within frames due to varying quality of slices. Therefore, we recommend combined use of up to five MGS fragments and slice mode, especially for low bitrate video applications.
Joint Texture And Depth Map Video Coding Based On The Scalable Extension Of H.264/Avc Depth-Image-Based Rendering (DIBR) is widely used for view synthesis in 3D video applications. Compared with traditional 2D video applications, both the texture video and its associated depth map are required for transmission in a communication system that supports DIBR. To efficiently utilize limited bandwidth, coding algorithms, e.g. the Advanced Video Coding (H.264/AVC) standard, can be adopted to compress the depth map using the 4:0:0 chroma sampling format. However, when the correlation between texture video and depth map is exploited, the compression efficiency may be improved compared with encoding them independently using H.264/AVC. A new encoder algorithm which employs Scalable Video Coding (SVC), the scalable extension of H.264/AVC, to compress the texture video and its associated depth map is proposed in this paper. Experimental results show that the proposed algorithm can provide up to 0.97 dB gain for the coded depth maps, compared with the simulcast scheme, wherein texture video and depth map are coded independently by H.264/AVC.
Video Transport over Heterogeneous Networks Using SCTP and DCCP As the internet continues to grow and mature, transmission of multimedia content is expected to increase and comprise a large portion of overall data traffic. The internet is becoming increasingly heterogeneous with the advent and growth of diverse wireless access networks such as WiFi, 3G Cellular and WiMax. The provision of quality of service (QoS) for multimedia transport such as video traffic over such heterogeneous networks is complex and challenging. The quality of video transport depends on many factors; among the more important are network condition and transport protocol. Traditional transport protocols such as UDP/TCP lack the functional requirements to meet the QoS requirements of today's multimedia applications. Therefore, a number of improved transport protocols are being developed. SCTP and DCCP fall into this category. In this paper, our focus has been on evaluating SCTP and DCCP performance for MPEG4 video transport over heterogeneous (wired cum wireless) networks. The performance metrics used for this evaluation include throughput, delay and jitter. We also evaluated these measures for UDP in order to have a basis for comparison. Extensive simulations have been performed using a network simulator for video downloading and uploading. In this scenario, DCCP achieves higher throughput, with less delay and jitter than SCTP and UDP. Based on the results obtained in this study, we find that DCCP can better meet the QoS requirements for the transport of video streaming traffic.
State of the Art in Stereoscopic and Autostereoscopic Displays Underlying principles of stereoscopic direct-view displays, binocular head-mounted displays, and autostereoscopic direct-view displays are explained and some early work as well as the state of the art in those technologies are reviewed. Stereoscopic displays require eyewear and can be categorized based on the multiplexing scheme as: 1) color multiplexed (old technology but there are some recent developments; low-quality due to color reproduction and crosstalk issues; simple and does not require additional electronics hardware); 2) polarization multiplexed (requires polarized light output and polarization-based passive eyewear; high-resolution and high-quality displays available); and 3) time multiplexed (requires faster display hardware and active glasses synchronized with the display; high-resolution commercial products available). Binocular head-mounted displays can readily provide 3-D, virtual images, immersive experience, and more possibilities for interactive displays. However, the bulk of the optics, matching of the left and right ocular images and obtaining a large field of view make the designs quite challenging. Some of the recent developments using unconventional optical relays allow for thin form factors and open up new possibilities. Autostereoscopic displays are very attractive as they do not require any eyewear. There are many possibilities in this category including: two-view (the simplest implementations are with a parallax barrier or a lenticular screen), multiview, head tracked (requires active optics to redirect the rays to a moving viewer), and super multiview (potentially can solve the accommodation-convergence mismatch problem). Earlier 3-D booms did not last long mainly due to the unavailability of enabling technologies and the content. Current developments in the hardware technologies provide a renewed interest in 3-D displays both from the consumers and the display manufacturers, which is evidenced by the recent commercial products and new research results in this field.
Quantification of YouTube QoE via Crowdsourcing This paper addresses the challenge of assessing and modeling Quality of Experience (QoE) for online video services that are based on TCP streaming. We present a dedicated QoE model for YouTube that takes into account the key influence factors (such as stalling events caused by network bottlenecks) that shape quality perception of this service. As a second contribution, we propose a generic subjective QoE assessment methodology for multimedia applications (like online video) that is based on crowdsourcing - a highly cost-efficient, fast and flexible way of conducting user experiments. We demonstrate how our approach successfully leverages the inherent strengths of crowdsourcing while addressing critical aspects such as the reliability of the experimental data obtained. Our results suggest that crowdsourcing is a highly effective QoE assessment method not only for online video, but also for a wide range of other current and future Internet applications.
Techniques for measuring quality of experience Quality of Experience (QoE) relates to how users perceive the quality of an application. To capture such a subjective measure, either by subjective tests or via objective tools, is an art on its own. Given the importance of measuring users’ satisfaction to service providers, research on QoE took flight in recent years. In this paper we present an overview of various techniques for measuring QoE, thereby mostly focusing on freely available tools and methodologies.
Quality of experience management in mobile cellular networks: key issues and design challenges. Telecom operators have recently faced the need for a radical shift from technical quality requirements to customer experience guarantees. This trend has emerged due to the constantly increasing number of mobile devices and applications and the explosion of overall traffic demand, forming a new era: "the rise of the consumer". New terms have been coined in order to quantify, manage, and improve the...
Endurance enhancement of flash-memory storage systems: an efficient static wear leveling design This work is motivated by the strong demand for reliability enhancement over flash memory. Our objective is to improve the endurance of flash memory with limited overhead and without many modifications to popular implementation designs, such as the Flash Translation Layer protocol (FTL) and the NAND Flash Translation Layer protocol (NFTL). A static wear leveling mechanism is proposed with limited memory-space requirements and an efficient implementation. The properties of the mechanism are then explored with various implementation considerations. Through a series of experiments based on a realistic trace, we show that the endurance of FTL and NFTL could be significantly improved with limited system overheads.
Semantic constraints for membership function optimization The optimization of fuzzy systems using bio-inspired strategies, such as neural network learning rules or evolutionary optimization techniques, is becoming more and more popular. In general, fuzzy systems optimized in such a way cannot provide a linguistic interpretation, preventing us from using one of their most interesting and useful features. This paper addresses this difficulty and points out a set of constraints that, when used within an optimization scheme, obviate the subjective task of interpreting membership functions. To achieve this, a comprehensive set of semantic properties that membership functions should have is postulated and discussed. These properties are translated in terms of nonlinear constraints that are coded within a given optimization scheme, such as backpropagation. Implementation issues and one example illustrating the importance of the proposed constraints are included.
Manifold models for signals and images This article proposes a new class of models for natural signals and images. These models constrain the set of patches extracted from the data under analysis to be close to a low-dimensional manifold. This manifold structure is detailed for various ensembles suitable for natural signals, images and textures modeling. These manifolds provide a low-dimensional parameterization of the local geometry of these datasets. These manifold models can be used to regularize inverse problems in signal and image processing. The restored signal is represented as a smooth curve or surface traced on the manifold that matches the forward measurements. A manifold pursuit algorithm iteratively computes a solution of the manifold regularization problem. Numerical simulations on inpainting and compressive sensing inversion show that manifold models bring an improvement for the recovery of data with geometrical features.
Heart rate and blood pressure estimation from compressively sensed photoplethysmograph. In this paper we consider the problem of low power SpO2 sensors for acquiring Photoplethysmograph (PPG) signals. Most of the power in SpO2 sensors goes to lighting red and infra-red LEDs. We use compressive sensing to lower the amount of time LEDs are lit, thereby reducing the signal acquisition power. We observe power savings by a factor that is comparable to the sampling rate. At the receiver, we reconstruct the signal with sufficient integrity for a given task. Here we consider the tasks of heart rate (HR) and blood pressure (BP) estimation. For BP estimation we use ECG signals along with the reconstructed PPG waveform. We show that the reconstruction quality can be improved at the cost of increasing compressed sensing bandwidth and receiver complexity for a given task. We present HR and BP estimation results using the MIMIC database.
1.080278
0.050388
0.050388
0.025278
0.025202
0.012742
0.006299
0.000158
0.000041
0
0
0
0
0
Linear programming in the semi-streaming model with application to the maximum matching problem In this paper we study linear-programming based approaches to the maximum matching problem in the semi-streaming model. In this model edges are presented sequentially, possibly in an adversarial order, and we are only allowed to use a small space. The allowed space is near linear in the number of vertices (and sublinear in the number of edges) of the input graph. The semi-streaming model is relevant in the context of processing of very large graphs. In recent years, there have been several new and exciting results in the semi-streaming model. However broad techniques such as linear programming have not been adapted to this model. In this paper we present several techniques to adapt and optimize linear-programming based approaches in the semi-streaming model. We use the maximum matching problem as a foil to demonstrate the effectiveness of adapting such tools in this model. As a consequence we improve almost all previous results on the semi-streaming maximum matching problem. We also prove new results on interesting variants.
Maximum degree and fractional matchings in uniform hypergraphs Let ℋ be a family of r-subsets of a finite set X. Set D(ℋ) = max_{x∈X} |{E : x ∈ E ∈ ℋ}| (the maximum degree). We say that ℋ is intersecting if for any H, H′ ∈ ℋ we have H ∩ H′ ≠ ∅. In this case, obviously, D(ℋ) ≥ |ℋ|/r. According to a well-known conjecture, D(ℋ) ≥ |ℋ|/(r − 1 + 1/r). We prove a slightly stronger result. Let ℋ be an r-uniform, intersecting hypergraph. Then either it is a projective plane of order r − 1, and consequently D(ℋ) = |ℋ|/(r − 1 + 1/r), or D(ℋ) ≥ |ℋ|/(r − 1). This is a corollary to a more general theorem on not necessarily intersecting hypergraphs.
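The projective-plane case of this bound is easy to check numerically: the Fano plane is a 3-uniform intersecting hypergraph with seven edges whose maximum degree equals |ℋ|/(r − 1 + 1/r) = 7/(7/3) = 3. A small verification sketch:

```python
# Quick check of the projective-plane case: the Fano plane is a 3-uniform intersecting
# hypergraph (r = 3) with 7 edges, and its maximum degree equals |H| / (r - 1 + 1/r) = 3.
from collections import Counter
from fractions import Fraction

fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
assert all(a & b for a in fano for b in fano)           # pairwise intersecting
deg = Counter(v for e in fano for v in e)               # vertex degrees
r = 3
print(max(deg.values()), Fraction(len(fano), 1) / (r - 1 + Fraction(1, r)))  # 3 and 3
```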
Near-Optimal Sparse Recovery in the L1 Norm We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x ∈ ℝ^n from its lower-dimensional sketch Ax ∈ ℝ^m. Specifically, we focus on the sparse recovery problem in the ℓ1 norm: for a parameter k, given the sketch Ax, compute an approximation x' of x such that the ℓ1 approximation error ∥x − x'∥_1 is close to the minimum of ∥x − x*∥_1 over all vectors x* with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes. In particular, this is the first recovery scheme that guarantees k log(n/k) sketch length and near-linear n log(n/k) recovery time simultaneously. It also features low encoding and update times, and is noise-resilient.
Approximate Sparse Recovery: Optimizing Time and Measurements A Euclidean approximate sparse recovery system consists of parameters $k,N$, an $m$-by-$N$ measurement matrix, $\bm{\Phi}$, and a decoding algorithm, $\mathcal{D}$. Given a vector, ${\mathbf x}$, the system approximates ${\mathbf x}$ by $\widehat {\mathbf x}=\mathcal{D}(\bm{\Phi} {\mathbf x})$, which must satisfy $|\widehat {\mathbf x} - {\mathbf x}|_2\le C |{\mathbf x} - {\mathbf x}_k|_2$, where ${\mathbf x}_k$ denotes the optimal $k$-term approximation to ${\mathbf x}$. (The output $\widehat{\mathbf x}$ may have more than $k$ terms.) For each vector ${\mathbf x}$, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number $m$ of measurements and the runtime of the decoding algorithm, $\mathcal{D}$. In this paper, we give a system with $m=O(k \log(N/k))$ measurements—matching a lower bound, up to a constant factor—and decoding time $k\log^{O(1)} N$, matching a lower bound up to a polylog$(N)$ factor. We also consider the encode time (i.e., the time to multiply $\bm{\Phi}$ by $x$), the time to update measurements (i.e., the time to multiply $\bm{\Phi}$ by a 1-sparse $x$), and the robustness and stability of the algorithm (resilience to noise before and after the measurements). Our encode and update times are optimal up to $\log(k)$ factors. The columns of $\bm{\Phi}$ have at most $O(\log^2(k)\log(N/k))$ nonzeros, each of which can be found in constant time. Our full result, a fully polynomial randomized approximation scheme, is as follows. If ${\mathbf x}={\mathbf x}_k+\nu_1$, where $\nu_1$ and $\nu_2$ (below) are arbitrary vectors (regarded as noise), then setting $\widehat {\mathbf x} = \mathcal{D}(\Phi {\mathbf x} + \nu_2)$, and for properly normalized $\bm{\Phi}$, we get $\left|{\mathbf x} - \widehat {\mathbf x}\right|_2^2 \le (1+\epsilon)\left|\nu_1\right|_2^2 + \epsilon\left|\nu_2\right|_2^2$ using $O((k/\epsilon)\log(N/k))$ measurements and $(k/\epsilon)\log^{O(1)}(N)$ time for decoding.
Normal hypergraphs and the perfect graph conjecture A hypergraph is called normal if the chromatic index of any partial hypergraph H′ of it coincides with the maximum valency in H′. It is proved that a hypergraph is normal iff the maximum number of disjoint hyperedges coincides with the minimum number of vertices representing the hyperedges in each partial hypergraph of it. This theorem implies the following conjecture of Berge: The complement of a perfect graph is perfect. A new proof is given for a related theorem of Berge and Las Vergnas. Finally, the results are applied to a problem of integer valued linear programming, slightly sharpening some results of Fulkerson.
Combining geometry and combinatorics: a unified approach to sparse signal recovery There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior either in the number of measurements or in noise tolerance.
Random Projections of Smooth Manifolds We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about a manifold-modeled signal. We center our analysis on the effect of a random linear projection operator Φ: ℝ^N → ℝ^M, M < N, on a smooth well-conditioned K-dimensional submanifold ℳ ⊂ ℝ^N. As our main theoretical contribution, we establish a sufficient number M of random projections to guarantee that, with high probability, all pairwise Euclidean and geodesic distances between points on ℳ are well preserved under the mapping Φ. Our results bear strong resemblance to the emerging theory of Compressed Sensing (CS), in which sparse signals can be recovered from small numbers of random linear measurements. As in CS, the random measurements we propose can be used to recover the original data in ℝ^N. Moreover, like the fundamental bound in CS, our requisite M is linear in the “information level” K and logarithmic in the ambient dimension N; we also identify a logarithmic dependence on the volume and conditioning of the manifold. In addition to recovering faithful approximations to manifold-modeled signals, however, the random projections we propose can also be used to discern key properties about the manifold. We discuss connections and contrasts with existing techniques in manifold learning, a setting where dimensionality reducing mappings are typically nonlinear and constructed adaptively from a set of sampled training data.
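The distance-preservation claim is easy to probe empirically. The sketch below samples points from a one-dimensional curve embedded in a high-dimensional space, applies a scaled Gaussian random projection, and reports the spread of pairwise distance ratios; this is only a numerical sanity check with made-up sizes, not the paper's bound.

```python
# Numerical illustration of near-isometry under random projection: sample points from a
# 1-D manifold (a smooth curve in R^N), project with a scaled Gaussian matrix, and compare
# pairwise Euclidean distances before and after.
import numpy as np

rng = np.random.default_rng(3)
N, M, n_pts = 1000, 30, 200
t = np.linspace(0, 2 * np.pi, n_pts)
freqs = rng.integers(1, 6, size=N)
points = np.cos(np.outer(t, freqs)) / np.sqrt(N)        # smooth curve embedded in R^N

Phi = rng.standard_normal((M, N)) / np.sqrt(M)          # scaled random projection operator
proj = points @ Phi.T

def pairwise(X):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.sqrt(np.maximum(d2, 0))

D, Dp = pairwise(points), pairwise(proj)
mask = D > 1e-9                                         # ignore coincident points
ratios = Dp[mask] / D[mask]
print(f"distance ratios in [{ratios.min():.2f}, {ratios.max():.2f}] (ideal: close to 1)")
```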
Compressed Sensing and Redundant Dictionaries This paper extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix, which is a composition of a random matrix of certain type and a deterministic dictionary, has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via basis pursuit (BP) from a small number of random measurements. Further, thresholding is investigated as recovery algorithm for compressed sensing, and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
Preference Modelling This paper provides the reader with a presentation of the fundamental notions of preference modelling as well as some recent results in this field. Preference modelling is an inevitable step in a variety of fields: economy, sociology, psychology, mathematical programming, even medicine, archaeology, and obviously decision analysis. Our notation and some basic definitions, such as those of binary relation, properties and ordered sets, are presented at the beginning of the paper. We start by discussing different reasons for constructing a model of preference. We then go through a number of issues that influence the construction of preference models. Different formalisations besides classical logic, such as fuzzy sets and non-classical logics, become necessary. We then present different types of preference structures reflecting the behavior of a decision-maker: classical, extended and valued ones. It is relevant to have a numerical representation of preferences: functional representations, value functions. The concepts of thresholds and minimal representation are also introduced in this section. In section 7, we briefly explore the concept of deontic logic (logic of preference) and other formalisms associated with "compact representation of preferences" introduced for spe-
Compressive Wave Computation This paper presents a method for computing the solution to the time-dependent wave equation from the knowledge of a largely incomplete set of eigenfunctions of the Helmholtz operator, chosen at random. While a linear superposition of eigenfunctions can fail to properly synthesize the solution if a single term is missing, it is shown that solving a sparsity-promoting ℓ1 minimization problem can vastly enhance the quality of recovery. This phenomenon may be seen as “compressive sampling in the Helmholtz domain.” An error bound is formulated for the one-dimensional wave equation with coefficients of small bounded variation. Under suitable assumptions, it is shown that the number of eigenfunctions needed to evolve a sparse wavefield defined on N points, accurately with very high probability, is bounded by C(η) · log N · log log N, where C(η) is related to the desired accuracy η and can be made to grow at a much slower rate than N when the solution is sparse. To the authors' knowledge, the partial differential equation estimates that underlie this result are new and may be of independent mathematical interest. They include an L1 estimate for the wave equation, an L∞−L2 estimate of the extension of eigenfunctions, and a bound for eigenvalue gaps in Sturm–Liouville problems. In practice, the compressive strategy is highly parallelizable, and may eventually lead to memory savings for certain inverse problems involving the wave equation. Numerical experiments illustrate these properties in one spatial dimension.
A general framework for accurate statistical timing analysis considering correlations The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more prominent. In this work, we present a general statistical timing analysis (STA) framework that captures spatial correlations between gate delays. The technique presented does not make any assumption about the distributions of the parameter variations, gate delay and arrival times. We propose a Taylor-series-expansion-based polynomial representation of gate delays and arrival times which is able to effectively capture the non-linear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced by polynomial modeling during STA, we also propose an efficient linear-modeling-driven polynomial STA scheme. On average, the degree-2 polynomial scheme achieves a 7.3× speedup over Monte Carlo, with 0.049 units of rms error relative to Monte Carlo. The technique is generic and can be applied to arbitrary variations in the underlying parameters.
Process variability-aware transient fault modeling and analysis Due to reduction in device feature size and supply voltage, the sensitivity of digital systems to transient faults is increasing dramatically. As technology scales further, the increase in transistor integration capacity also leads to the increase in process and environmental variations. Despite these difficulties, it is expected that systems remain reliable while delivering the required performance. Reliability and variability are emerging as new design challenges, thus pointing to the importance of modeling and analysis of transient faults and variation sources for the purpose of guiding the design process. This work presents a symbolic approach to modeling the effect of transient faults in digital circuits in the presence of variability due to process manufacturing. The results show that using a nominal case and not including variability effects can underestimate the SER by 5% for the 50% yield point and by 10% for the 90% yield point.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.115789
0.002657
0.000308
0.000204
0.000089
0.000003
0
0
0
0
0
0
0
0
Numerical Studies of Three-dimensional Stochastic Darcy's Equation and Stochastic Advection-Diffusion-Dispersion Equation Solute transport in randomly heterogeneous porous media is commonly described by stochastic flow and advection-dispersion equations with a random hydraulic conductivity field. The statistical distribution of conductivity of engineered and naturally occurring porous material can vary, depending on its origin. We describe solutions of a three-dimensional stochastic advection-dispersion equation using a probabilistic collocation method (PCM) on sparse grids for several distributions of hydraulic conductivity. Three random distributions of log hydraulic conductivity are considered: uniform, Gaussian, and truncated Gaussian (beta). Log hydraulic conductivity is represented by a Karhunen-Loève (K-L) decomposition as a second-order random process with an exponential covariance function. The convergence of PCM has been demonstrated. It appears that the accuracy in both the mean and the standard deviation of PCM solutions can be improved by using the Jacobi-chaos representing the truncated Gaussian distribution rather than the Hermite-chaos for the Gaussian distribution. The effect of type of distribution and parameters such as the variance and correlation length of log hydraulic conductivity and dispersion coefficient on leading moments of the advection velocity and solute concentration was investigated.
A Hybrid HDMR for Mixed Multiscale Finite Element Methods with Application to Flows in Random Porous Media. Stochastic modeling has become a popular approach to quantifying uncertainty in flows through heterogeneous porous media. In this approach the uncertainty in the heterogeneous structure of material properties is often parametrized by a high-dimensional random variable, leading to a family of deterministic models. The numerical treatment of this stochastic model becomes very challenging as the dimension of the parameter space increases. To efficiently tackle the high-dimensionality, we propose a hybrid high-dimensional model representation (HDMR) technique, through which the high-dimensional stochastic model is decomposed into a moderate-dimensional stochastic model, in the most active random subspace, and a few one-dimensional stochastic models. The derived low-dimensional stochastic models are solved by incorporating the sparse-grid stochastic collocation method with the proposed hybrid HDMR. In addition, the properties of porous media, such as permeability, often display heterogeneous structure across multiple spatial scales. To treat this heterogeneity we use a mixed multiscale finite element method (MMsFEM). To capture the nonlocal spatial features (i.e., channelized structures) of the porous media and the important effects of random variables, we can hierarchically incorporate the global information individually from each of the random parameters. This significantly enhances the accuracy of the multiscale simulation. Thus, the synergy of the hybrid HDMR and the MMsFEM reduces the dimension of the flow model in both the stochastic and physical spaces, and hence significantly decreases the computational complexity. We analyze the proposed hybrid HDMR technique and the derived stochastic MMsFEM. Numerical experiments are carried out for two-phase flows in random porous media to demonstrate the efficiency and accuracy of the proposed hybrid HDMR with MMsFEM.
A Stochastic Mortar Mixed Finite Element Method for Flow in Porous Media with Multiple Rock Types This paper presents an efficient multiscale stochastic framework for uncertainty quantification in modeling of flow through porous media with multiple rock types. The governing equations are based on Darcy's law with nonstationary stochastic permeability represented as a sum of local Karhunen-Loève expansions. The approximation uses stochastic collocation on either a tensor product or a sparse grid, coupled with a domain decomposition algorithm known as the multiscale mortar mixed finite element method. The latter method requires solving a coarse scale mortar interface problem via an iterative procedure. The traditional implementation requires the solution of local fine scale linear systems on each iteration. We employ a recently developed modification of this method that precomputes a multiscale flux basis to avoid the need for subdomain solves on each iteration. In the stochastic setting, the basis is further reused over multiple realizations, leading to collocation algorithms that are more efficient than the traditional implementation by orders of magnitude. Error analysis and numerical experiments are presented.
A Resourceful Splitting Technique with Applications to Deterministic and Stochastic Multiscale Finite Element Methods. In this paper we use a splitting technique to develop new multiscale basis functions for the multiscale finite element method (MsFEM). The multiscale basis functions are iteratively generated using a Green's kernel. The Green's kernel is based on the first differential operator of the splitting. The proposed MsFEM is applied to deterministic elliptic equations and stochastic elliptic equations, and we show that the proposed MsFEM can considerably reduce the dimension of the random parameter space for stochastic problems. By combining the method with sparse grid collocation methods, the need for a prohibitive number of deterministic solves is alleviated. We rigorously analyze the convergence of the proposed method for both the deterministic and stochastic elliptic equations. Computational complexity discussions are also offered to supplement the convergence analysis. A number of numerical results are presented to confirm the theoretical findings.
A Two-Scale Nonperturbative Approach to Uncertainty Analysis of Diffusion in Random Composites Many physical systems, such as natural porous media, are highly heterogeneous and characterized by parameters that are uncertain due to the lack of sufficient data. This uncertainty (randomness) occurs on a multiplicity of scales. We focus on random composites with the two dominant scales of uncertainty: large-scale uncertainty in the spatial arrangement of materials and small-scale uncertainty in the parameters within each material. We propose an approach that combines random domain decompositions and polynomial chaos expansions to account for the large and small scales of uncertainty, respectively. We present a general framework and use one-dimensional diffusion to demonstrate that our combined approach provides robust, nonperturbative approximations for the statistics of system states.
An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations In recent years, there has been a growing interest in analyzing and quantifying the effects of random inputs in the solution of ordinary/partial differential equations. To this end, the spectral stochastic finite element method (SSFEM) is the most popular method due to its fast convergence rate. Recently, the stochastic sparse grid collocation method has emerged as an attractive alternative to SSFEM. It approximates the solution in the stochastic space using Lagrange polynomial interpolation. The collocation method requires only repetitive calls to an existing deterministic solver, similar to the Monte Carlo method. However, both the SSFEM and current sparse grid collocation methods utilize global polynomials in the stochastic space. Thus when there are steep gradients or finite discontinuities in the stochastic space, these methods converge very slowly or even fail to converge. In this paper, we develop an adaptive sparse grid collocation strategy using piecewise multi-linear hierarchical basis functions. Hierarchical surplus is used as an error indicator to automatically detect the discontinuity region in the stochastic space and adaptively refine the collocation points in this region. Numerical examples, especially for problems related to long-term integration and stochastic discontinuity, are presented. Comparisons with Monte Carlo and multi-element based random domain decomposition methods are also given to show the efficiency and accuracy of the proposed method.
On Discrete Least-Squares Projection in Unbounded Domain with Random Evaluations and its Application to Parametric Uncertainty Quantification. This work is concerned with approximating multivariate functions in an unbounded domain by using a discrete least-squares projection with random point evaluations. Particular attention is given to functions with random Gaussian or gamma parameters. We first demonstrate that the traditional Hermite (Laguerre) polynomials chaos expansion suffers from the instability in the sense that an unfeasible number of points, which is relevant to the dimension of the approximation space, is needed to guarantee the stability in the least-squares framework. We then propose to use the Hermite/Laguerre functions (rather than polynomials) as bases in the expansion. The corresponding design points are obtained by mapping the uniformly distributed random points in bounded intervals to the unbounded domain, which involved a mapping parameter L. By using the Hermite/Laguerre functions and a proper mapping parameter, the stability can be significantly improved even if the number of design points scales linearly (up to a logarithmic factor) with the dimension of the approximation space. Apart from the stability, another important issue is the rate of convergence. To speed up the convergence, an effective scaling factor is introduced, and a principle for choosing quasi-optimal scaling factor is discussed. Applications to parametric uncertainty quantification are illustrated by considering a random ODE model together with an elliptic problem with lognormal random input.
Neural networks and approximation theory
When are quasi-Monte Carlo algorithms efficient for high dimensional integrals? Recently quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is done by identifying classes of functions for which the effect of the dimension d...
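A small, hedged numerical illustration of the quasi-Monte Carlo versus Monte Carlo comparison discussed above; it assumes SciPy 1.7+ for scipy.stats.qmc, and the integrand (whose exact integral over the unit cube is 1, with rapidly decaying dependence on later coordinates, i.e. low effective dimension) is an illustrative choice rather than one taken from the paper.

import numpy as np
from scipy.stats import qmc

d, n = 16, 2048                                              # dimension, number of points (power of 2)
f = lambda u: np.prod(1.0 + (u - 0.5) / np.arange(1, d + 1) ** 2, axis=1)  # exact integral = 1

rng = np.random.default_rng(0)
mc_est = f(rng.random((n, d))).mean()                        # plain Monte Carlo estimate

sobol = qmc.Sobol(d=d, scramble=True, seed=0)                # scrambled Sobol sequence
qmc_est = f(sobol.random(n)).mean()                          # quasi-Monte Carlo estimate

print("MC  error:", abs(mc_est - 1.0))
print("QMC error:", abs(qmc_est - 1.0))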
A multiscale framework for Compressive Sensing of video Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
A new linguistic computational model based on discrete fuzzy numbers for computing with words In recent years, several different linguistic computational models for dealing with linguistic information in processes of computing with words have been proposed. However, until now all of them rely on the special semantics of the linguistic terms, usually fuzzy numbers in the unit interval, and the linguistic aggregation operators are based on aggregation operators in [0,1]. In this paper, a linguistic computational model based on discrete fuzzy numbers whose support is a subset of consecutive natural numbers is presented ensuring the accuracy and consistency of the model. In this framework, no underlying membership functions are needed and several aggregation operators defined on the set of all discrete fuzzy numbers are presented. These aggregation operators are constructed from aggregation operators defined on a finite chain in accordance with the granularity of the linguistic term set. Finally, an example of a multi-expert decision-making problem in a hierarchical multi-granular linguistic context is given to illustrate the applicability of the proposed method and its advantages.
Application of a self tuner using fuzzy control technique A self-tuning expert fuzzy controller has been developed and applied in real time to a process control problem. As in other expert systems, the knowledge base consists of rules describing the control law in terms of the process error and the resulting control action. Conditions and conclusions of each rule are fuzzy variables which are described through their membership curves. The inference engine used is the backward chaining process of the Prolog language. To implement the self-tuning property, the membership curve of the controller output has been changed according to an error based performance index. A control supervisor makes this tuning decision as a function of past or predicted future set-point errors of the control system. To verify the viability of this fuzzy controller, it has been applied to control the speed of a DC motor operating under different loading conditions. The paper also discusses the stability problems associated with this control scheme.
Testability-Driven Statistical Path Selection In the face of large-scale process variations, statistical timing methodology has advanced significantly over the last few years, and statistical path selection takes advantage of it in at-speed testing. In deterministic path selection, the separation of path selection and test generation is known to require time consuming iteration between the two processes. This paper shows that in statistical path selection, this is not only the case, but also the quality of results can be severely degraded even after the iteration. To deal with this issue, we consider testability in the first place by integrating a satisfiability (SAT) solver, and this necessitates a new statistical path selection method. We integrate the SAT solver in a novel way that leverages the conflict analysis of modern SAT solvers, which provides more than 4X speedup without special optimizations of the SAT solver for this particular application. Our proposed method is based on a generalized path criticality metric whose properties allow efficient pruning. Our experimental results show that the proposed method achieves 47% better quality of results on average, and up to 361X speedup compared to statistical path selection followed by test generation.
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
1.068376
0.066667
0.033333
0.022222
0.010305
0.001728
0.000476
0.0001
0.000012
0
0
0
0
0
Uncertainty quantification for integrated circuits: Stochastic spectral methods Due to significant manufacturing process variations, the performance of integrated circuits (ICs) has become increasingly uncertain. Such uncertainties must be carefully quantified with efficient stochastic circuit simulators. This paper discusses the recent advances of stochastic spectral circuit simulators based on generalized polynomial chaos (gPC). Such techniques can handle both Gaussian and non-Gaussian random parameters, showing remarkable speedup over Monte Carlo for circuits with a small or medium number of parameters. We focus on the recently developed stochastic testing and the application of conventional stochastic Galerkin and stochastic collocation schemes to nonlinear circuit problems. The uncertainty quantification algorithms for static, transient and periodic steady-state simulations are presented along with some practical simulation results. Some open problems in this field are discussed.
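The following toy sketch is not the stochastic circuit simulator described above; it only illustrates, under stated assumptions, the collocation idea for a single standard Gaussian parameter: probabilists' Gauss-Hermite quadrature (plain numpy) recovers the mean and standard deviation of a smooth output with a handful of deterministic evaluations, compared against plain Monte Carlo. The output function y(ξ) is an arbitrary stand-in for a circuit response.

import numpy as np

# Toy "circuit output" depending on one standard Gaussian parameter xi.
y = lambda xi: np.exp(0.3 * xi) / (1.0 + 0.1 * xi ** 2)

# Stochastic collocation: probabilists' Gauss-Hermite nodes/weights (weight exp(-x^2/2)).
nodes, weights = np.polynomial.hermite_e.hermegauss(9)
weights = weights / np.sqrt(2.0 * np.pi)          # normalize against the N(0,1) density
mean = np.sum(weights * y(nodes))
var = np.sum(weights * y(nodes) ** 2) - mean ** 2

# Monte Carlo reference with many more samples.
samples = y(np.random.default_rng(0).standard_normal(200_000))
print("collocation mean/std:", mean, np.sqrt(var))
print("Monte Carlo mean/std:", samples.mean(), samples.std())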
Joint sizing and adaptive independent gate control for FinFET circuits operating in multiple voltage regimes using the logical effort method FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework.
Fast and accurate statistical characterization of standard cell libraries With devices entering the nanometer scale, process-induced variations, intrinsic variations and reliability issues impose new challenges for the electronic design automation industry. Design automation tools must keep pace with technology and continue to predict high-level design metrics such as delay and power accurately and efficiently. Although it is the most time-consuming, Monte Carlo is still the simplest and most widely used technique for simulating the impact of process variability at circuit level. This work addresses the problem of efficient alternatives to Monte Carlo for modeling circuit characteristics under statistical variability. It employs the error-propagation technique and Response Surface Methodology to substitute for Monte Carlo simulations in library characterization.
Nonparametric multivariate density estimation: a comparative study The paper algorithmically and empirically studies two major types of nonparametric multivariate density estimation techniques, where no assumption is made about the data being drawn from any of known parametric families of distribution. The first type is the popular kernel method (and several of its variants) which uses locally tuned radial basis (e.g., Gaussian) functions to interpolate the multidimensional density; the second type is based on an exploratory projection pursuit technique which interprets the multidimensional density through the construction of several 1D densities along highly “interesting” projections of multidimensional data. Performance evaluations using training data from mixture Gaussian and mixture Cauchy densities are presented. The results show that the curse of dimensionality and the sensitivity of control parameters have a much more adverse impact on the kernel density estimators than on the projection pursuit density estimators
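For the kernel side of this comparison, a minimal hedged example using scipy.stats.gaussian_kde on a two-component 2-D Gaussian mixture is sketched below; the mixture, sample size, and default Scott's-rule bandwidth are illustrative choices, and the projection-pursuit estimator studied in the paper has no counterpart in SciPy.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# 2-D mixture: half the points near (0, 0), half near (4, 4).
data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(4, 1, (500, 2))]).T   # shape (2, N)

kde = gaussian_kde(data)                   # Gaussian kernels, Scott's-rule bandwidth
query = np.array([[0.0, 4.0, 2.0],         # x-coordinates of three query points
                  [0.0, 4.0, 2.0]])        # y-coordinates
print(kde(query))                          # estimated density at the two modes and the saddle between them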
Identification of PARAFAC-Volterra cubic models using an Alternating Recursive Least Squares algorithm A broad class of nonlinear systems can be modelled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filters structure. This paper is concerned with the problem of identification of third-order Volterra kernels. A tensorial decomposition called PARAFAC is used to represent such a kernel. A new algorithm called the Alternating Recursive Least Squares (ARLS) algorithm is applied to identify this decomposition for estimating the Volterra kernels of cubic systems. This method significantly reduces the computational complexity of Volterra kernel estimation. Simulation results show the ability of the proposed method to achieve a good identification and an important complexity reduction, i.e. representation of Volterra cubic kernels with few parameters.
Remembrance of Transistors Past: Compact Model Parameter Extraction Using Bayesian Inference and Incomplete New Measurements In this paper, we propose a novel MOSFET parameter extraction method to enable early technology evaluation. The distinguishing feature of the proposed method is that it enables the extraction of an entire set of MOSFET model parameters using limited and incomplete IV measurements from on-chip monitor circuits. An important step in this method is the use of maximum-a-posteriori estimation where past measurements of transistors from various technologies are used to learn a prior distribution and its uncertainty matrix for the parameters of the target technology. The framework then utilizes Bayesian inference to facilitate extraction using a very small set of additional measurements. The proposed method is validated using various past technologies and post-silicon measurements for a commercial 28-nm process. The proposed extraction could also be used to characterize the statistical variations of MOSFETs with the significant benefit that some constraints required by the backward propagation of variance (BPV) method are relaxed.
A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms. We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products. The algorithm, called TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1) series via the singular value decomposition (SVD). TTr1SVD naturally generalizes the SVD to the tensor regime with properties such as uniqueness for a fixed order of indices, orthogonal rank-1 outer product terms, and easy truncation error quantification. Using an outer product column table it also allows, for the first time, a complete characterization of all tensors orthogonal with the original tensor. Incidentally, this leads to a strikingly simple constructive proof showing that the maximum rank of a real 2 x 2 x 2 tensor over the real field is 3. We also derive a conversion of the TTr1 decomposition into a Tucker decomposition with a sparse core tensor. Numerical examples illustrate each of the favorable properties of the TTr1 decomposition.
Dimension-wise integration of high-dimensional functions with applications to finance We present a new general class of methods for the computation of high-dimensional integrals. The quadrature schemes result by truncation and discretization of the anchored-ANOVA decomposition. They are designed to exploit low effective dimensions and include sparse grid methods as special case. To derive bounds for the resulting modelling and discretization errors, we introduce effective dimensions for the anchored-ANOVA decomposition. We show that the new methods can be applied in locally adaptive and dimension-adaptive ways and demonstrate their efficiency by numerical experiments with high-dimensional integrals from finance.
Stochastic integral equation solver for efficient variation-aware interconnect extraction In this paper we present an efficient algorithm for extracting the complete statistical distribution of the input impedance of interconnect structures in the presence of a large number of random geometrical variations. The main contribution in this paper is the development of a new algorithm, which combines both Neumann expansion and Hermite expansion, to accurately and efficiently solve stochastic linear system of equations. The second contribution is a new theorem to efficiently obtain the coefficients of the Hermite expansion while computing only low order integrals. We establish the accuracy of the proposed algorithm by solving stochastic linear systems resulting from the discretization of the stochastic volume integral equation and comparing our results to those obtained from other techniques available in the literature, such as Monte Carlo and stochastic finite element analysis. We further prove the computational efficiency of our algorithm by solving large problems that are not solvable using the current state of the art.
Mismatch analysis and direct yield optimization by spec-wise linearization and feasibility-guided search We present a new method for mismatch analysis and automatic yield optimization of analog integrated circuits with respect to global, local and operational tolerances. Effectiveness and efficiency of yield estimation and optimization are guaranteed by consideration of feasibility regions and by performance linearization at worst-case points. The proposed methods were successfully applied to two example circuits for an industrial fabrication process.
Fast and Accurate DPPM Computation Using Model Based Filtering Defective Parts Per Million (DPPM) is an important quality metric that indicates the ratio of defective devices shipped to the customers. It is necessary to estimate and minimize DPPM in order to meet the desired level of quality. However, DPPM estimation requires statistical simulations, which are computationally costly if traditional methods are used. In this work, we propose an efficient DPPM estimation method for analog circuits that greatly reduces the computational burden. We employ a model based approach to selectively simulate only consequential samples in DPPM estimation. We include methods to mitigate the effect of model imperfection and robust model fitting to guarantee a consistent and efficient estimation. Experimental results show that the proposed method achieves 10x to 25x reduction in the number of simulations for an RF receiver front-end circuit.
On the fractional covering number of hypergraphs The fractional covering number r* of a hypergraph H(V, E) is defined to be the minimum
Sparse representation and learning in visual recognition: Theory and applications Sparse representation and learning has been widely used in computational intelligence, machine learning, computer vision and pattern recognition, etc. Mathematically, solving sparse representation and learning involves seeking the sparsest linear combination of basis functions from an overcomplete dictionary. A rationale behind this is the sparse connectivity between nodes in the human brain. This paper presents a survey of some recent work on sparse representation, learning and modeling with emphasis on visual recognition. It covers both the theory and application aspects. We first review the sparse representation and learning theory including general sparse representation, structured sparse representation, high-dimensional nonlinear learning, Bayesian compressed sensing, sparse subspace learning, non-negative sparse representation, robust sparse representation, and efficient sparse representation. We then introduce the applications of sparse theory to various visual recognition tasks, including feature representation and selection, dictionary learning, Sparsity Induced Similarity (SIS) measures, sparse coding based classification frameworks, and sparsity-related topics.
A robust periodic arnoldi shooting algorithm for efficient analysis of large-scale RF/MM ICs The verification of large radio-frequency/millimeter-wave (RF/MM) integrated circuits (ICs) has regained attention for high-performance designs beyond 90nm and 60GHz. The traditional time-domain verification by standard Krylov-subspace based shooting method might not be able to deal with newly increased verification complexity. The numerical algorithms with small computational cost yet superior convergence are highly desired to extend designers' creativity to probe those extremely challenging designs of RF/MM ICs. This paper presents a new shooting algorithm for periodic RF/MM-IC systems. Utilizing a periodic structure of the state matrix, a periodic Arnoldi shooting algorithm is developed to exploit the structured Krylov-subspace. This leads to an improved efficiency and convergence. Results from several industrial examples show that the proposed periodic Arnoldi shooting method, called PAS, is 1000 times faster than the direct-LU and the explicit GMRES methods. Moreover, when compared to the existing industrial standard, a matrix-free GMRES with non-structured Krylov-subspace, the new PAS method reduces iteration number and runtime by 3 times with the same accuracy.
1.043126
0.04448
0.04448
0.043788
0.042862
0.025818
0.015074
0.006085
0.000645
0.000067
0.000001
0
0
0
Block-Based Compressed Sensing of Images and Video A number of techniques for the compressed sensing of imagery are surveyed. Various imaging media are considered, including still images, motion video, as well as multiview image sets and multiview video. A particular emphasis is placed on block-based compressed sensing due to its advantages in terms of both lightweight reconstruction complexity as well as a reduced memory burden for the random-projection measurement operator. For multiple-image scenarios, including video and multiview imagery, motion and disparity compensation is employed to exploit frame-to-frame redundancies due to object motion and parallax, resulting in residual frames which are more compressible and thus more easily reconstructed from compressed-sensing measurements. Extensive experimental comparisons evaluate various prominent reconstruction algorithms for still-image, motion-video, and multiview scenarios in terms of both reconstruction quality as well as computational complexity.
Improved total variation minimization method for compressive sensing by intra-prediction Total variation (TV) minimization algorithms are often used to recover sparse signals or images in the compressive sensing (CS). But the use of TV solvers often suffers from undesirable staircase effect. To reduce this effect, this paper presents an improved TV minimization method for block-based CS by intra-prediction. The new method conducts intra-prediction block by block in the CS reconstruction process and generates a residual for the image block being decoded in the CS measurement domain. The gradient of the residual is sparser than that of the image itself, which can lead to better reconstruction quality in CS by TV regularization. The staircase effect can also be eliminated due to effective reconstruction of the residual. Furthermore, to suppress blocking artifacts caused by intra-prediction, an efficient adaptive in-loop deblocking filter was designed for post-processing during the CS reconstruction process. Experiments show competitive performances of the proposed hybrid method in comparison with state-of-the-art TV models for CS with respect to peak signal-to-noise ratio and the subjective visual quality.
Sparse Signal Recovery Using Markov Random Fields Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.
Exact Reconstruction of Sparse Signals via Nonconvex Minimization Several authors have shown recently that it is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum ℓ1 norm among those having the given measurements. We show that by replacing the ℓ1 norm with the ℓp norm with p < 1, exact reconstruction is possible ...
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log^2 N), where N is the length of the signal. In applications, most signals of interest contain scant information relative to their ambient dimension, but the classical approach to signal acquisition ignores this fact. We usually collect a complete representation of the target signal and process this representation to sieve out the actionable information. Then we discard the rest. Contemplating this ugly inefficiency, one might ask if it is possible instead to acquire compressive samples. In other words, is there some type of measurement that automatically winnows out the information from a signal? Incredibly, the answer is sometimes yes. Compressive sampling refers to the idea that, for certain types of signals, a small number of nonadaptive samples carries sufficient information to approximate the signal well. Research in this area has two major components: Sampling: How many samples are necessary to reconstruct signals to a specified precision? What type of samples? How can these sampling schemes be implemented in practice? Reconstruction: Given the compressive samples, what algorithms can efficiently construct a signal approximation?
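A compact, hedged numpy sketch of the CoSaMP iteration as outlined in the abstract (signal proxy, support merge, least squares on the merged support, pruning to the s largest entries); the stopping rule, problem sizes and iteration count are simplifications for illustration, not the authors' reference implementation.

import numpy as np

def cosamp(A, y, s, iters=20):
    """Recover an s-sparse x from y = A x with a CoSaMP-style iteration."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x
        proxy = A.T @ r                                    # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]         # indices of the 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))           # merge with the current support
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)    # least squares on the merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]                  # prune to the s largest coefficients
        x[T[keep]] = b[keep]
        if np.linalg.norm(y - A @ x) < 1e-10:
            break
    return x

rng = np.random.default_rng(0)
m, n, s = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp(A, A @ x_true, s)
print("max abs error:", np.abs(x_hat - x_true).max())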
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
Singularity detection and processing with wavelets The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
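The modulus-maxima machinery of this paper is involved; as a much simpler stand-in for experimentation, the hedged sketch below performs plain wavelet soft-thresholding of a noisy 1-D signal (assuming the PyWavelets package), which shares the idea of suppressing noise in the wavelet domain but is not the maxima-based algorithm described above. The signal, wavelet and threshold rule are illustrative choices.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sign(np.sin(6 * np.pi * t))                # piecewise-constant signal with singularities
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, 'db4')

print("noisy RMSE:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))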
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large- scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of n nodes, each having a piece of information or data xj, j = 1,...,n. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each xj is a scalar quantity for the sake of this illustration. Collectively these data x = (x1,...,xn)T, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; n may be a thousand or a million or more.
On the quasi-Monte Carlo method with Halton points for elliptic PDEs with log-normal diffusion. This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Robust Regression and Lasso Lasso, or l1 regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Second, robustness can itself be used as an avenue for exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formulation is related to kernel density estimation, and based on this approach, a proof that Lasso is consistent is given, using robustness directly. Finally, a theorem is proved which states that sparsity and algorithmic stability contradict each other, and hence Lasso is not stable.
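A routine, hedged scikit-learn illustration of the l1-regularized least squares (Lasso) discussed in this abstract; the synthetic data, noise level, and regularization strength alpha are arbitrary choices, and the robustness analysis of the paper is not reproduced here.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features = 100, 50
X = rng.standard_normal((n_samples, n_features))
w_true = np.zeros(n_features)
w_true[:5] = [2.0, -3.0, 1.5, 0.5, 4.0]                  # only a few active coefficients
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)    # noisy observations

model = Lasso(alpha=0.05).fit(X, y)                      # l1-regularized least squares
support = np.flatnonzero(model.coef_)
print("nonzero coefficients:", support)
print("estimated values:    ", model.coef_[support].round(2))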
Induced uncertain linguistic OWA operators applied to group decision making The ordered weighted averaging (OWA) operator was developed by Yager [IEEE Trans. Syst., Man, Cybernet. 18 (1998) 183]. Later, Yager and Filev [IEEE Trans. Syst., Man, Cybernet.--Part B 29 (1999) 141] introduced a more general class of OWA operators called the induced ordered weighted averaging (IOWA) operators, which take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. The aim of this paper is to develop some induced uncertain linguistic OWA (IULOWA) operators, in which the second components are uncertain linguistic variables. Some desirable properties of the IULOWA operators are studied, and then, the IULOWA operators are applied to group decision making with uncertain linguistic information.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.1
0.033333
0.014286
0.00303
0.001163
0
0
0
0
0
0
0
0
0
Compressive light field photography using overcomplete dictionaries and optimized projections Light field photography has gained a significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
Compressed Sensing with Coherent and Redundant Dictionaries This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.
Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization. Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
Image denoising via sparse and redundant representations over learned dictionaries. We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
Compressed Sensing. Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ2 error O(N^{1/2-1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing). The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓp balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
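A hedged end-to-end example of the sparse-recovery problem sketched in this abstract, with one substitution stated plainly: greedy orthogonal matching pursuit from scikit-learn stands in for the Basis Pursuit linear program, purely for convenience; the dimensions, sparsity level and Gaussian measurement matrix are illustrative choices rather than quantities from the paper.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
m_dim, n_meas, sparsity = 256, 80, 10                           # ambient dimension, measurements, sparsity
x = np.zeros(m_dim)
x[rng.choice(m_dim, sparsity, replace=False)] = rng.standard_normal(sparsity)

Phi = rng.standard_normal((n_meas, m_dim)) / np.sqrt(n_meas)    # random measurement matrix
y = Phi @ x                                                     # n_meas << m_dim nonadaptive samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False).fit(Phi, y)
print("support recovered:", set(np.flatnonzero(omp.coef_)) == set(np.flatnonzero(x)))
print("relative error:   ", np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x))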
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
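A minimal scikit-learn sketch of the kernel idea discussed in the tutorial; the toy dataset and the hyperparameters C and gamma are arbitrary choices, and the library call is of course not part of the tutorial itself.

```python
# Minimal SVM example with a nonlinear (RBF) kernel; the data set and the
# hyperparameters C and gamma are arbitrary illustrative choices.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X_tr, y_tr)
print("support vectors:", clf.n_support_.sum())
print("test accuracy:  ", clf.score(X_te, y_te))
```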
A review on spectrum sensing for cognitive radio: challenges and solutions Cognitive radio is widely expected to be the next Big Bang in wireless communications. Spectrum sensing, that is, detecting the presence of the primary users in a licensed spectrum, is a fundamental problem for cognitive radio. As a result, spectrum sensing has been reborn as a very active research area in recent years despite its long history. In this paper, spectrum sensing techniques from the optimal likelihood ratio test to energy detection, matched filtering detection, cyclostationary detection, eigenvalue-based sensing, joint space-time sensing, and robust sensing methods are reviewed. Cooperative spectrum sensing with multiple receivers is also discussed. Special attention is paid to sensing methods that need little prior information on the source signal and the propagation channel. Practical challenges such as noise power uncertainty are discussed and possible solutions are provided. Theoretical analysis on the test statistic distribution and threshold setting is also investigated.
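Energy detection, the simplest of the surveyed methods, can be sketched as follows. The noise power is assumed known here and the threshold follows the usual large-N Gaussian approximation for complex noise; the paper discusses what happens when the noise power is uncertain.

```python
# Energy detection sketch for spectrum sensing.  Under H0 (complex Gaussian
# noise only) the normalized test statistic is approximately Gaussian for
# large N, giving the usual threshold  lambda = sigma2 * (1 + Q^{-1}(Pfa)/sqrt(N)).
import numpy as np
from scipy.stats import norm

def energy_detect(x, sigma2, pfa=0.01):
    N = len(x)
    stat = np.mean(np.abs(x) ** 2)
    threshold = sigma2 * (1.0 + norm.isf(pfa) / np.sqrt(N))
    return stat > threshold, stat, threshold

rng = np.random.default_rng(1)
N, sigma2 = 1000, 1.0
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(sigma2 / 2)
signal = 0.5 * np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))   # hypothetical primary user

print(energy_detect(noise, sigma2)[0])            # typically False (no primary user)
print(energy_detect(noise + signal, sigma2)[0])   # typically True
```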
A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets Ranking methods, similarity measures and uncertainty measures are very important concepts for interval type-2 fuzzy sets (IT2 FSs). So far, there is only one ranking method for such sets, whereas there are many similarity and uncertainty measures. A new ranking method and a new similarity measure for IT2 FSs are proposed in this paper. All these ranking methods, similarity measures and uncertainty measures are compared based on real survey data and then the most suitable ranking method, similarity measure and uncertainty measure that can be used in the computing with words paradigm are suggested. The results are useful in understanding the uncertainties associated with linguistic terms and hence how to use them effectively in survey design and linguistic information processing.
On Linear and Semidefinite Programming Relaxations for Hypergraph Matching The hypergraph matching problem is to find a largest collection of disjoint hyperedges in a hypergraph. This is a well-studied problem in combinatorial optimization and graph theory with various applications. The best known approximation algorithms for this problem are all local search algorithms. In this paper we analyze different linear and semidefinite programming relaxations for the hypergraph matching problem, and study their connections to the local search method. Our main results are the following: • We consider the standard linear programming relaxation of the problem. We provide an algorithmic proof of a result of Füredi, Kahn and Seymour, showing that the integrality gap is exactly k - 1 + 1/k for k-uniform hypergraphs, and is exactly k - 1 for k-partite hypergraphs. This yields an improved approximation algorithm for the weighted 3-dimensional matching problem. Our algorithm combines the use of the iterative rounding method and the fractional local ratio method, showing a new way to round linear programming solutions for packing problems. • We study the strengthening of the standard LP relaxation by local constraints. We show that, even after a linear number of rounds of the Sherali-Adams lift-and-project procedure on the standard LP relaxation, there are k-uniform hypergraphs with integrality gap at least k - 2. On the other hand, we prove that for every constant k, there is a strengthening of the standard LP relaxation by only a polynomial number of constraints, with integrality gap at most (k + 1)/2 for k-uniform hypergraphs. The construction uses a result in extremal combinatorics. • We consider the standard semidefinite programming relaxation of the problem. We prove that the Lovász ϑ-function provides an SDP relaxation with integrality gap at most (k + 1)/2. The proof gives an indirect way (not by a rounding algorithm) to bound the ratio between any local optimal solution and any optimal SDP solution. This shows a new connection between local search and linear and semidefinite programming relaxations.
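The standard LP relaxation analyzed in the paper maximizes the total weight of fractional hyperedges subject to a degree constraint at every vertex. A tiny sketch on a hand-made 3-uniform hypergraph (hypothetical data) is shown below.

```python
# Standard LP relaxation of hypergraph matching:
#   max sum_e w_e x_e   s.t.   sum_{e containing v} x_e <= 1,   0 <= x_e <= 1.
# The tiny 3-uniform hypergraph below is hypothetical illustration data.
import numpy as np
from scipy.optimize import linprog

vertices = range(6)
edges = [(0, 1, 2), (2, 3, 4), (3, 4, 5), (0, 4, 5)]
weights = [1.0, 1.0, 1.0, 1.0]

# One row per vertex: A_ub[v, e] = 1 if vertex v lies in hyperedge e.
A_ub = np.zeros((len(vertices), len(edges)))
for e_idx, e in enumerate(edges):
    for v in e:
        A_ub[v, e_idx] = 1.0
b_ub = np.ones(len(vertices))

# linprog minimizes, so negate the weights to maximize.
res = linprog(-np.array(weights), A_ub=A_ub, b_ub=b_ub,
              bounds=(0, 1), method="highs")
print("fractional matching x:", np.round(res.x, 3))
print("LP optimum:", -res.fun)
```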
On Generalized Induced Linguistic Aggregation Operators In this paper, we define various generalized induced linguistic aggregation operators, including the generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components, which are linguistic variables (or uncertain linguistic variables) and are then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and the linguistic ordered weighted averaging (LOWA) operator are special cases of the GILOWA operator, that the induced linguistic ordered weighted geometric (ILOWG) operator and the linguistic ordered weighted geometric (LOWG) operator are special cases of the GILOWG operator, that the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and the uncertain linguistic ordered weighted averaging (ULOWA) operator are special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and the uncertain LOWG operator are special cases of the GIULOWG operator.
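As a rough numerical analogue of these operators (plain numbers rather than linguistic variables, so only a loose sketch of the mechanism), an induced generalized OWA reorders the arguments by an inducing variable and combines them with a power mean:

```python
# Numeric sketch of an induced generalized OWA aggregation: each item is a pair
# (inducing value u_i, argument a_i); the arguments are reordered by decreasing
# u_i and combined with a generalized (power) mean of order lam.  This is a
# plain-number illustration of the mechanism, not the linguistic or
# uncertain-linguistic operators defined in the paper.
def generalized_induced_owa(pairs, weights, lam=1.0):
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)   # order by u_i
    args = [a for _, a in ordered]
    if lam == 0:                                                # geometric-mean limit
        prod = 1.0
        for w, a in zip(weights, args):
            prod *= a ** w
        return prod
    return sum(w * a ** lam for w, a in zip(weights, args)) ** (1.0 / lam)

pairs = [(0.9, 3.0), (0.4, 5.0), (0.7, 2.0)]     # hypothetical (importance, value) pairs
weights = [0.5, 0.3, 0.2]
print(generalized_induced_owa(pairs, weights, lam=1.0))  # induced OWA (arithmetic case)
print(generalized_induced_owa(pairs, weights, lam=2.0))  # quadratic variant
```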
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.014286
0.011111
0.004651
0.000157
0
0
0
0
0
0
0
0
0
A Quadratic Modeling-Based Framework for Accurate Statistical Timing Analysis Considering Correlations The impact of parameter variations on timing due to process variations has become significant in recent years. In this paper, we present a statistical timing analysis (STA) framework with quadratic gate delay models that also captures spatial correlations. Our technique does not make any assumption about the distribution of the parameter variations, gate delays, and arrival times. We propose a Taylor-series expansion-based quadratic representation of gate delays and arrival times which is able to effectively capture the nonlinear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced by quadratic modeling during STA, we also propose an efficient linear modeling driven quadratic STA scheme. We ran two sets of experiments assuming the global parameters to have uniform and Gaussian distributions, respectively. On average, the quadratic STA scheme had a 20.5 times speedup in runtime compared to Monte Carlo simulations, with an rms error of 0.00135 units between the two timing cumulative distribution functions (CDFs). The linear modeling driven quadratic STA scheme had a 51.5 times speedup in runtime compared to Monte Carlo simulations, with an rms error of 0.0015 units between the two CDFs. Our proposed technique is generic and can be applied to arbitrary variations in the underlying parameters under any spatial correlation model.
Statistical static timing analysis using a skew-normal canonical delay model In its simplest form, a parameterized block based statistical static timing analysis (SSTA) is performed by assuming that both gate delays and the arrival times at various nodes are Gaussian random variables. These assumptions are not true in many cases. Quadratic models are used for more accurate analysis, but at the cost of increased computational complexity. In this paper, we propose a model based on skew-normal random variables. It can take into account the skewness in the gate delay distribution as well as the nonlinearity of the MAX operation. We derive analytical expressions for the moments of the MAX operator based on the conditional expectations. The computational complexity of using this model is marginally higher than the linear model based on Clark's approximations. The results obtained using this model match well with Monte-Carlo simulations.
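For reference, the classical Clark moment formulas for max(X, Y) of two jointly Gaussian arrival times, which the linear canonical model relies on and the skew-normal model refines, can be written down directly; the sketch below is that baseline, not the paper's skew-normal derivation.

```python
# Clark's classical moment formulas for max(X, Y) of two jointly Gaussian
# arrival times (the baseline approximation referenced in the abstract).
import math

def clark_max_moments(mu1, s1, mu2, s2, rho):
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)   # N(0,1) pdf
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))          # N(0,1) cdf
    a = math.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)                  # assumes a > 0
    alpha = (mu1 - mu2) / a
    m1 = mu1 * Phi(alpha) + mu2 * Phi(-alpha) + a * phi(alpha)
    m2 = ((mu1**2 + s1**2) * Phi(alpha) + (mu2**2 + s2**2) * Phi(-alpha)
          + (mu1 + mu2) * a * phi(alpha))
    return m1, m2 - m1**2          # mean and variance of max(X, Y)

print(clark_max_moments(10.0, 1.0, 9.5, 2.0, rho=0.3))   # hypothetical arrival times
```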
Probabilistic interval-valued computation: toward a practical surrogate for statistics inside CAD tools Interval methods offer a general, fine-grain strategy for modeling correlated range uncertainties in numerical algorithms. We present a new, improved interval algebra that extends the classical affine form to a more rigorous statistical foundation. Range uncertainties now take the form of confidence intervals. In place of pessimistic interval bounds, we minimize the probability of numerical "escape"; this can tighten interval bounds by 10X, while yielding 10-100X speedups over Monte Carlo. The formulation relies on three critical ideas: liberating the affine model from the assumption of symmetric intervals; a unifying optimization formulation; and a concrete probabilistic model. We refer to these as probabilistic intervals, for brevity. Our goal is to understand where we might use these as a surrogate for expensive, explicit statistical computations. Results from sparse matrices and graph delay algorithms demonstrate the utility of the approach, and the remaining challenges.
Fast Monte Carlo estimation of timing yield with importance sampling and transistor-level circuit simulation Considerable effort has been expended in the electronic design automation community in trying to cope with the statistical timing problem. Most of this effort has been aimed at generalizing the static timing analyzers to the statistical case. On the other hand, detailed transistor-level simulations of the critical paths in a circuit are usually performed at the final stage of performance verification. We describe a transistor-level Monte Carlo (MC) technique which makes final transistor-level timing verification practically feasible. The MC method is used as a golden reference in assessing the accuracy of other timing yield estimation techniques. However, it is generally believed that it cannot be used in practice, as it requires too many costly transistor-level simulations. We present a novel approach to constructing an improved MC estimator for timing yield which provides the same accuracy as standard MC but at the cost of far fewer transistor-level simulations. This improved estimator is based on a unique combination of a variance reduction technique, importance sampling, and a cheap but approximate gate delay model. The results we present demonstrate that our improved yield estimator achieves the same accuracy as standard MC at a cost reduction reaching several orders of magnitude.
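A generic sketch of the importance-sampling idea behind such an estimator: sample the process parameter from a shifted proposal that puts more mass in the failure region and reweight by the likelihood ratio. The one-parameter delay model below is a toy stand-in for the transistor-level simulation used in the paper.

```python
# Toy importance-sampling estimate of timing-yield loss P(delay > t_spec).
# The delay model is a deliberately simple function of one Gaussian process
# parameter; the paper combines this idea with transistor-level simulation
# and an approximate gate delay model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
delay = lambda p: 100.0 + 3.0 * p + 0.5 * p**2      # hypothetical delay (ps) vs. parameter
t_spec = 112.0
n = 5000

# Standard Monte Carlo: parameter ~ N(0, 1).
p_mc = rng.standard_normal(n)
fail_mc = np.mean(delay(p_mc) > t_spec)

# Importance sampling: draw from the shifted proposal N(3, 1) and reweight
# each sample by the likelihood ratio f(p) / g(p); the estimator stays unbiased.
p_is = rng.normal(loc=3.0, scale=1.0, size=n)
w = norm.pdf(p_is, 0.0, 1.0) / norm.pdf(p_is, 3.0, 1.0)
fail_is = np.mean(w * (delay(p_is) > t_spec))

print("plain MC estimate:  ", fail_mc)
print("importance sampling:", fail_is)
```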
A framework for block-based timing sensitivity analysis Since process and environmental variations can no longer be ignored in high-performance microprocessor designs, it is necessary to develop techniques for computing the sensitivities of the timing slacks to parameter variations. This additional slack information enables designers to examine paths that have large sensitivities to various parameters: such paths are not robust, even though they may have large nominal slacks and may hence be ignored in traditional timing analysis. We present a framework for block-based timing analysis, where the parameters are specified as ranges -- rather than statistical distributions which are hard to know in practice. We show that our approach -- which scales well with the number of processors -- is accurate at all values of the parameters within the specified bounds, and not just at the worst-case corner. This allows the designers to quantify the robustness of the design at any design point. We validate our approach on circuit blocks extracted from a commercial 45nm microprocessor.
Variability Driven Gate Sizing for Binning Yield Optimization High performance applications are highly affected by process variations due to the considerable spread in their expected frequencies after fabrication. Typically, "binning" is applied to those chips that do not meet their performance requirement after fabrication. Using binning, such failing chips are sold at a loss (e.g., proportional to the degree to which they fail their performance requirement). This paper discusses a gate-sizing algorithm to minimize the "yield-loss" associated with binning. We propose a binning yield-loss function as a suitable objective to be minimized. We show this objective is convex with respect to the size variables and consequently can be optimally and efficiently solved. These contributions are made without any specific assumptions about the sources of variability or how they are modeled. We show that computation of the binning yield-loss can be done via any desired statistical static timing analysis (SSTA) tool. The proposed technique is compared with a recently proposed sensitivity-based statistical sizer, a deterministic sizer with a worst-case variability estimate, and a deterministic sizer with a relaxed area constraint. We show consistent improvement compared to the sensitivity-based approach in quality of solution (final binning yield-loss value) as well as a huge run-time gain. Moreover, we show that a deterministic sizer with a relaxed area constraint will also result in reasonably good binning yield-loss values for the extra area overhead.
Statistical Timing Analysis: From Basic Principles to State of the Art Static-timing analysis (STA) has been one of the most pervasive and successful analysis engines in the design of digital circuits for the last 20 years. However, in recent years, the increased loss of predictability in semiconductor devices has raised concern over the ability of STA to effectively model statistical variations. This has resulted in extensive research in the so-called statistical STA (SSTA), which marks a significant departure from the traditional STA framework. In this paper, we review the recent developments in SSTA. We first discuss its underlying models and assumptions, then survey the major approaches, and close by discussing its remaining key challenges.
Timing yield estimation from static timing analysis This paper presents a means for estimating parametric timing yield and guiding robust design for-quality in the presence of manufacturing and operating environment variations. Dual emphasis is on computational efficiency and providing meaningful robust-design guidance. Computational efficiency is achieved by basing the proposed methodology on a post-processing step applied to the report generated as a by-product of static timing analysis. Efficiency is also ensured by exploiting the fact that for small processing/environment variations, a linear model is adequate for capturing the resulting delay change. Meaningful design guidance is achieved by analyzing the timing-related influence of variations on a path-by-path basis, allowing designers perform a quality-oriented design pass focused on key paths. A coherent strategy is provided to handle both die-to-die and within-die variations. Examples from a PowerPC microprocessor illustrate the methodology and its capabilities
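As a hedged numerical illustration of the linear-sensitivity view used by this kind of post-processing (all numbers invented): with a linear delay model in independent Gaussian parameter variations, the slack of a path is Gaussian and its parametric yield has a closed form.

```python
# Linear-sensitivity timing-yield sketch: slack = s0 - sum_i a_i * dX_i with
# independent dX_i ~ N(0, sigma_i^2), so slack is Gaussian and the parametric
# yield of the path is Phi(mu_slack / sigma_slack).  All numbers are invented.
import math

def path_yield(nominal_slack, sensitivities, sigmas):
    mu = nominal_slack                              # E[dX_i] = 0
    var = sum((a * s) ** 2 for a, s in zip(sensitivities, sigmas))
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
    return Phi(mu / math.sqrt(var))

# Path with 50 ps nominal slack and three variation sources.
print(path_yield(50.0, sensitivities=[20.0, 15.0, 5.0], sigmas=[1.0, 1.5, 2.0]))
```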
A Tutorial on Support Vector Machines for Pattern Recognition The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
Imaging via Compressive Sampling Image compression algorithms convert high-resolution images into relatively small bit streams, in effect turning a large digital data set into a substantially smaller one. This article introduces compressive sampling and recovery using convex programming.
Dynamic adaptive streaming over HTTP dataset The delivery of audio-visual content over the Hypertext Transfer Protocol (HTTP) has received a lot of attention in recent years, and with dynamic adaptive streaming over HTTP (DASH) a standard is now available. Many papers cover this topic and present their research results, but unfortunately all of them use their own private dataset which -- in most cases -- is not publicly available. Hence, it is difficult to compare, e.g., adaptation algorithms in an objective way due to the lack of a common dataset which can be used as a basis for such experiments. In this paper, we present our DASH dataset including our DASHEncoder, an open source DASH content generation tool. We also provide basic evaluations of the different segment lengths, the influence of HTTP server settings, and, in this context, we show some of the advantages as well as problems of shorter segment lengths.
Future Multimedia Networking, Second International Workshop, FMN 2009, Coimbra, Portugal, June 22-23, 2009. Proceedings
Intelligent Analysis and Off-Line Debugging of VLSI Device Test Programs Today's microelectronics researchers design VLSI devices to achieve highly differentiated devices, both in performance and functionality. As VLSI devices become more complex, VLSI device testing becomes more costly and time consuming. The increasing test complexity leads to longer device test program development time as well as more expensive test systems, and debugging test programs is a great burden to test program development. On the other hand, there is little formal theory of debugging, and attempts to develop a methodology of debugging are rare. The aim of the investigation in this paper is to create a theory to support analysis and debugging of VLSI device test programs, and then, on the basis of this theory, design and develop an off-line debugging environment, OLDEVDTP, for the creation, analysis, checking, identification, error location, and correction of the device test programs off-line from the target VLSI test system, to achieve a dramatic cost and time reduction. In the paper, fuzzy comprehensive evaluation techniques are applied to the program analysis and debugging process to reduce restrictions caused by computational complexity. Analysis, design, and implementation of OLDEVDTP are also addressed in the paper.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
1.0525
0.06
0.016667
0.01
0.00625
0.004167
0.001364
0.000013
0
0
0
0
0
0
Toward total quality of experience: A QoE model in a communication ecosystem. In recent years, the quality of experience notion has become a major research theme within the telecommunications community. QoE is an assessment of the human experience when interacting with technology and business entities in a particular context. A communication ecosystem encompasses various domains such as technical aspects, business models, human behavior, and context. For each aspect of a co...
Queuing based optimal scheduling mechanism for QoE provisioning in cognitive radio relaying network In a cognitive radio network (CRN), secondary users (SU) can share the licensed spectrum with the primary users (PU). Compared with traditional networks, spectrum utilization in a CRN is greatly improved. In order to ensure the performance of the SUs as well as the PU, wireless relaying can be employed to improve the system capacity. Meanwhile, quality-of-experience (QoE) should be considered and provisioned in the relay scheduling scheme to ensure user experience and comprehensive network performance. In this paper, we studied a QoE provisioning mechanism for a queuing based optimal relay scheduling problem in CRN. We designed a QoE provisioning scheme with multiple optimization goals: higher capacity and lower packet loss probability. The simulation results showed that our mechanism could achieve much better packet-loss performance with suboptimal system capacity, and indicated that it could guarantee a better user experience through the specific QoS-QoE mapping models. Thus, our mechanism can improve network performance and user experience comprehensively.
Mobile quality of experience: Recent advances and challenges Quality of Experience (QoE) is important from both a user perspective, since it assesses the quality a user actually experiences, and a network perspective, since it is important for a provider to dimension its network to support the necessary QoE. This paper presents some recent advances on the modeling and measurement of QoE with an emphasis on mobile networks. It also identifies key challenges for mobile QoE.
Personalized user engagement modeling for mobile videos. The ever-increasing mobile video services and users’ demand for better video quality have boosted research into the video Quality-of-Experience. Recently, the concept of Quality-of-Experience has evolved to Quality-of-Engagement, a more actionable metric to evaluate users’ engagement to the video services and directly relate to the service providers’ revenue model. Existing works on user engagement mostly adopt uniform models to quantify the engagement level of all users, overlooking the essential distinction of individual users. In this paper, we first conduct a large-scale measurement study on a real-world data set to demonstrate the dramatic discrepancy in user engagement, which implies that a uniform model is not expressive enough to characterize the distinctive engagement pattern of each user. To address this problem, we propose PE, a personalized user engagement model for mobile videos, which, for the first time, addresses the user diversity in the engagement modeling. Evaluation results on a real-world data set show that our system significantly outperforms the uniform engagement models, with a 19.14% performance gain.
QoE-based transport optimization for video delivery over next generation cellular networks Video streaming is considered as one of the most important and challenging applications for next generation cellular networks. Current infrastructures are not prepared to deal with the increasing amount of video traffic. The current Internet, and in particular the mobile Internet, was not designed with video requirements in mind and, as a consequence, its architecture is very inefficient for handling video traffic. Enhancements are needed to cater for improved Quality of Experience (QoE) and improved reliability in a mobile network. In this paper we design a novel dynamic transport architecture for next generation mobile networks adapted to video service requirements. Its main novelty is the transport optimization of video delivery that is achieved through a QoE oriented redesign of networking mechanisms as well as the integration of Content Delivery Networks (CDN) techniques.
VoIP-based calibration of the DQX model In the Internet Protocol (IP) ecosystem, Quality-of-Experience (QoE) is important information needed by Service Providers (SP) to improve their services. However, end-user satisfaction, which can be reflected by QoE metrics, cannot be easily measured like technical variables such as bandwidth and latency. QoE can either be estimated through mathematical models or measured through an experimental setup. In this work a Voice-over-Internet Protocol (VoIP) based QoE measurement setup has been designed to capture end-user QoE in VoIP services. The data measured during these experiments are used to define all necessary parameters of the Deterministic QoE model (DQX) in this VoIP scenario. Such a calibration of the model is essential to adapt it to the particular service and the technical and non-technical conditions in which it is used. Furthermore, the DQX results achieved are compared with those of the IQX hypothesis and of the E-Model proposed by the ITU-T. It is finally shown that DQX can capture end-user QoE in VoIP scenarios more accurately.
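For context, the IQX hypothesis mentioned above is commonly stated as an exponential law, QoE = alpha * exp(-beta * QoS) + gamma. The sketch below fits such a curve to hypothetical measurement points; the paper instead calibrates its DQX model from real VoIP experiments.

```python
# Fitting an IQX-style exponential QoE model, QoE = alpha * exp(-beta * x) + gamma,
# where x is a QoS impairment (e.g., packet loss in %).  The data points below
# are hypothetical; real calibration would use measured VoIP ratings.
import numpy as np
from scipy.optimize import curve_fit

iqx = lambda x, alpha, beta, gamma: alpha * np.exp(-beta * x) + gamma

loss_pct = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])       # hypothetical QoS values
mos      = np.array([4.4, 4.1, 3.8, 3.2, 2.4, 1.6])       # hypothetical MOS ratings

params, _ = curve_fit(iqx, loss_pct, mos, p0=(3.5, 0.3, 1.0))
print("alpha, beta, gamma =", np.round(params, 3))
print("predicted MOS at 3% loss:", round(iqx(3.0, *params), 2))
```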
Low-power wireless sensor nodes for ubiquitous long-term biomedical signal monitoring In the past few years, the use of wireless sensor nodes for remote health care monitoring has been advocated as an attractive alternative to the traditional hospital-centric health care system from both the economic perspective and the patient comfort viewpoint. The semiconductor industry plays a crucial role in making the changes in the health care system a reality. User acceptance of remote health monitoring systems depends on their comfort level, among other factors. The comfort level directly translates to the form factor, which is ultimately defined by the battery size and system power consumption. This article introduces low-power wireless sensor nodes for biomedical applications that are capable of operating autonomously or on very small batteries. In particular, we take a closer look at component-level power optimizations for the radio and the digital signal processing core as well as the trade-off between radio power consumption and on-node processing. We also provide a system-level model for WSNs that helps in guiding the power optimization process with respect to various trade-offs.
Studying the experience of mobile applications used in different contexts of daily life Mobile applications and services increasingly assist us in our daily life situations, fulfilling our needs for information, communication, entertainment or leisure. However, user acceptance of a mobile application depends on at least two conditions; the application's perceived Quality of Experience (QoE) and the appropriateness of the application to the user's situation and context. Yet, there is generally a weak understanding of a mobile user's QoE and the factors influencing it. The mobile user's experience is related to the Quality of Service (QoS) provided by the underlying service and network infrastructures, which provides a starting point for our work. We present "work-in-progress" results from an ongoing study of Android phone users. In this study, we aim to derive and improve understanding of their QoE in different situations and daily life environments. In particular, we evaluate the user's qualitative QoE for a set of widely used mobile applications in the users' natural environments and different contexts, and we analyze this experience and its relation to the underlying quantitative QoS. In our approach we collect both QoE and QoS measures through a combination of user, application and network input from mobile phones. We present initial data acquired in the study and derived from that, a set of preliminary implications for mobile applications design.
A near optimal QoE-driven power allocation scheme for SVC-based video transmissions over MIMO systems In this paper, we propose a near optimal power allocation scheme, which maximizes the quality of experience (QoE), for scalable video coding (SVC) based video transmissions over multi-input multi-output (MIMO) systems. This scheme tries to optimize the received video quality according to video frame-error-rate (FER), which may be caused by either transmission errors in physical (PHY) layer or video coding structures in application (APP) layer. Due to the complexity of the original optimization problem, we decompose it into several sub-problems, which can then be solved by classic convex optimization methods. Detailed algorithms with corresponding theoretical derivations are provided. Simulations with real video traces demonstrate the effectiveness of our proposed scheme.
ELASTIC: A Client-Side Controller for Dynamic Adaptive Streaming over HTTP (DASH) Today, video distribution platforms use adaptive video streaming to deliver the maximum Quality of Experience to a wide range of devices connected to the Internet through different access networks. Among the techniques employed to implement video adaptivity, the stream-switching over HTTP is getting a wide acceptance due to its deployment and implementation simplicity. Recently it has been shown that the client-side algorithms proposed so far generate an on-off traffic pattern that may lead to unfairness and underutilization when many video flows share a bottleneck. In this paper we propose ELASTIC (fEedback Linearization Adaptive STreamIng Controller), a client-side controller designed using feedback control theory that does not generate an on-off traffic pattern. By employing a controlled testbed, allowing bandwidth capacity and delays to be set, we compare ELASTIC with other client-side controllers proposed in the literature. In particular, we have checked to what extent the considered algorithms are able to: 1) fully utilize the bottleneck, 2) fairly share the bottleneck, 3) obtain a fair share when TCP greedy flows share the bottleneck with video flows. The obtained results show that ELASTIC achieves a very high fairness and is able to get the fair share when coexisting with TCP greedy flows.
Toward a generalized theory of uncertainty (GTU): an outline It is a deep-seated tradition in science to view uncertainty as a province of probability theory. The generalized theory of uncertainty (GTU) which is outlined in this paper breaks with this tradition and views uncertainty in a much broader perspective.Uncertainty is an attribute of information. A fundamental premise of GTU is that information, whatever its form, may be represented as what is called a generalized constraint. The concept of a generalized constraint is the centerpiece of GTU. In GTU, a probabilistic constraint is viewed as a special-albeit important-instance of a generalized constraint.A generalized constraint is a constraint of the form X isr R, where X is the constrained variable, R is a constraining relation, generally non-bivalent, and r is an indexing variable which identifies the modality of the constraint, that is, its semantics. The principal constraints are: possibilistic (r=blank); probabilistic (r=p); veristic (r=v); usuality (r=u); random set (r=rs); fuzzy graph (r=fg); bimodal (r=bm); and group (r=g). Generalized constraints may be qualified, combined and propagated. The set of all generalized constraints together with rules governing qualification, combination and propagation constitutes the generalized constraint language (GCL).The generalized constraint language plays a key role in GTU by serving as a precisiation language for propositions, commands and questions expressed in a natural language. Thus, in GTU the meaning of a proposition drawn from a natural language is expressed as a generalized constraint. Furthermore, a proposition plays the role of a carrier of information. This is the basis for equating information to a generalized constraint.In GTU, reasoning under uncertainty is treated as propagation of generalized constraints, in the sense that rules of deduction are equated to rules which govern propagation of generalized constraints. A concept which plays a key role in deduction is that of a protoform (abbreviation of prototypical form). Basically, a protoform is an abstracted summary-a summary which serves to identify the deep semantic structure of the object to which it applies. A deduction rule has two parts: symbolic-expressed in terms of protoforms-and computational.GTU represents a significant change both in perspective and direction in dealing with uncertainty and information. The concepts and techniques introduced in this paper are illustrated by a number of examples.
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
Some specific types of fuzzy relation equations In this paper we study some specific types of fuzzy equations. More specifically, we focus on analyzing equations involving two fuzzy subsets of the same referential and a fuzzy relation defined over fuzzy subsets.
Study on the QoE for VoIP Networks.
1.01337
0.013623
0.013623
0.013623
0.011583
0.011111
0.00571
0.002367
0.000153
0.000003
0
0
0
0
A hybrid learning algorithm for a class of interval type-2 fuzzy neural networks In real life, information about the world is uncertain and imprecise. This uncertainty is due to deficiencies in the given information, the fuzzy nature of our perception of events and objects, and the limitations of the models we use to explain the world. The development of new methods for dealing with information with uncertainty is crucial for solving real life problems. In this paper three interval type-2 fuzzy neural network (IT2FNN) architectures are proposed, with hybrid learning algorithm techniques (gradient descent backpropagation and gradient descent with adaptive learning rate backpropagation). At the antecedents layer, an interval type-2 fuzzy neuron (IT2FN) model is used, and at the consequents layer an interval type-1 fuzzy neuron (IT1FN) model, in order to fuzzify the rule antecedents and consequents of an interval type-2 Takagi-Sugeno-Kang fuzzy inference system (IT2-TSK-FIS). IT2-TSK-FIS is integrated in an adaptive neural network, in order to take advantage of the best of both models. This provides a high order intuitive mechanism for representing imperfect information by means of fuzzy If-Then rules, in addition to handling uncertainty and imprecision. On the other hand, neural networks are highly adaptable, with learning and generalization capabilities. Experimental results are divided into two kinds: in the first, a non-linear identification problem for control systems is simulated and a comparative analysis of the IT2FNN and ANFIS learning architectures is carried out. In the second, a non-linear Mackey-Glass chaotic time series prediction problem with uncertainty sources is studied. Finally, the IT2FNN proved to be a more efficient mechanism for modeling real-world problems.
Fuzzy Composite Concepts based on human reasoning Fuzzy Logic Systems (FLSs) provide a proven toolset in mimicking human reasoning. In this paper, we will present the idea of Fuzzy Composite Concepts (FCCs) which allow for a closer imitation of human reasoning in terms of integrating a large number of parameters into a single concept suitable for higher level reasoning. FCCs are based on standard FLSs and transparently extend them to provide intuitively interpretable rule bases and improve resilience and reusability of the overall FLS. We are providing an overview of the philosophical concepts behind FCCs and discuss their applicability. We describe the implementation of FCCs and demonstrate their benefits using real world examples based on our work in Ambient Intelligent Environments.
Computational Intelligence Software for Interval Type-2 Fuzzy Logic A software tool for interval type-2 fuzzy logic is presented in this article. The software tool includes a graphical user interface for construction, edition, and observation of the fuzzy systems. The Interval Type-2 Fuzzy Logic System Toolbox (IT2FLS) has a user-friendly environment for interval type-2 fuzzy logic inference system development. Tools that cover the different phases of the fuzzy system design process, from the initial description phase, to the final implementation phase, are presented as part of the Toolbox. The Toolbox's best properties are the capacity to develop complex systems and the flexibility that permits the user to extend the availability of functions for working with type-2 fuzzy operators, linguistic variables, interval type-2 membership functions, defuzzification methods, and the evaluation of interval type-2 fuzzy inference systems. The toolbox can be used for educational and research purposes. (c) 2011 Wiley Periodicals, Inc. Comput Appl Eng Educ 21: 737-747, 2013
On characterization of generalized interval-valued fuzzy rough sets on two universes of discourse This paper proposes a general study of (I,T)-interval-valued fuzzy rough sets on two universes of discourse integrating the rough set theory with the interval-valued fuzzy set theory by constructive and axiomatic approaches. Some primary properties of interval-valued fuzzy logical operators and the construction approaches of interval-valued fuzzy T-similarity relations are first introduced. Determined by an interval-valued fuzzy triangular norm and an interval-valued fuzzy implicator, a pair of lower and upper generalized interval-valued fuzzy rough approximation operators with respect to an arbitrary interval-valued fuzzy relation on two universes of discourse is then defined. Properties of I-lower and T-upper interval-valued fuzzy rough approximation operators are examined based on the properties of interval-valued fuzzy logical operators discussed above. Connections between interval-valued fuzzy relations and interval-valued fuzzy rough approximation operators are also established. Finally, an operator-oriented characterization of interval-valued fuzzy rough sets is proposed, that is, interval-valued fuzzy rough approximation operators are characterized by axioms. Different axiom sets of I-lower and T-upper interval-valued fuzzy set-theoretic operators guarantee the existence of different types of interval-valued fuzzy relations which produce the same operators.
Progressive design methodology for complex engineering systems based on multiobjective genetic algorithms and linguistic decision making This work focuses on a design methodology that aids in the design and development of complex engineering systems. The design methodology consists of simulation, optimization and decision making. Within this work a framework is presented in which modelling, multi-objective optimization and multi criteria decision making techniques are used to design an engineering system. Due to the complexity of the designed system, a three-step design process is suggested. In the first step, multi-objective optimization using a genetic algorithm is used. In the second step, a multi attribute decision making process based on linguistic variables is suggested in order to help the designer express preferences. In the last step, fine tuning of a few selected variants is performed. This methodology is named the progressive design methodology. The method is applied as a case study to the design of a permanent magnet brushless DC motor drive and the results are compared with experimental values.
Multi-step prediction of pulmonary infection with the use of evolutionary fuzzy cognitive maps The task of prediction in the medical domain is a very complex one, considering the level of vagueness and the need for uncertainty management. The main objective of the presented research is the multi-step prediction of the state of pulmonary infection with the use of a predictive model learnt on the basis of data that change with time. The contribution of this paper is twofold. In the application domain, in order to predict the state of pneumonia, the approach of fuzzy cognitive maps (FCMs) is proposed as an easy-to-use, interpretable, and flexible predictive model. In the theoretical part, addressing the requirements of the medical problem, a multi-step enhancement of the evolutionary algorithm applied to learn the FCM was introduced. The advantage of using our method was justified theoretically and then verified experimentally. The results of our investigation seem to be encouraging, presenting the advantage of using the proposed multi-step prediction approach.
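A minimal sketch of the inference step that a fuzzy cognitive map iterates for multi-step prediction, using one common update rule with a sigmoid transfer function; the concept weights below are hypothetical, and learning them with an evolutionary algorithm (as in the paper) is outside this sketch.

```python
# One common fuzzy cognitive map (FCM) update rule:
#   A_i(t+1) = f( A_i(t) + sum_{j != i} w_ji * A_j(t) ),  with f = sigmoid.
# The 3-concept weight matrix below is hypothetical; in practice the weights
# would be learned (e.g., by an evolutionary algorithm) from clinical data.
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def fcm_step(state, W):
    return sigmoid(state + W.T @ state)     # W[j, i] = influence of concept j on i

W = np.array([[0.0,  0.6, -0.3],
              [0.4,  0.0,  0.5],
              [0.0, -0.2,  0.0]])
state = np.array([0.7, 0.2, 0.5])           # initial concept activations

for t in range(5):                          # multi-step simulation
    state = fcm_step(state, W)
    print(f"t={t+1}:", np.round(state, 3))
```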
Comment on: "Image thresholding using type II fuzzy sets". Importance of this method In this work we develop some reflections on the thresholding algorithm proposed by Tizhoosh in [16]. The purpose of these reflections is to complete the considerations published recently in [17,18] on said algorithm. We also prove that under certain constructions, Tizhoosh's algorithm makes it possible to obtain additional information from commonly used fuzzy algorithms.
A 2uFunction representation for non-uniform type-2 fuzzy sets: Theory and design The theoretical and computational complexities involved in non-uniform type-2 fuzzy sets (T2 FSs) are main obstacles to apply these sets to modeling high-order uncertainties. To reduce the complexities, this paper introduces a 2uFunction representation for T2 FSs. This representation captures the ideas from probability theory. By using this representation, any non-uniform T2 FS can be represented by a function of two uniform T2 FSs. In addition, any non-uniform T2 fuzzy logic system (FLS) can be indirectly designed by two uniform T2 FLSs. In particular, a 2uFunction-based trapezoid T2 FLS is designed. Then, it is applied to the problem of forecasting Mackey-Glass time series corrupted by two kinds of noise sources: (1) stationary and (2) non-stationary additive noises. Finally, the performance of the proposed FLS is compared by (1) other types of FLS: T1 FLS and uniform T2 FLS, and (2) other studies: ANFIS [54], IT2FNN-1 [54], T2SFLS [3] and Q-T2FLS [35]. Comparative results show that the proposed design has a low prediction error as well as is suitable for online applications.
Multivariate modeling and type-2 fuzzy sets This paper explores the link between type-2 fuzzy sets and multivariate modeling. Elements of a space X are treated as observations fuzzily associated with values in a multivariate feature space. A category or class is likewise treated as a fuzzy allocation of feature values (possibly dependent on values in X). We observe that a type-2 fuzzy set on X generated by these two fuzzy allocations captures imprecision in the class definition and imprecision in the observations. In practice many type-2 fuzzy sets are in fact generated in this way and can therefore be interpreted as the output of a classification task. We then show that an arbitrary type-2 fuzzy set can be so constructed, by taking as a feature space a set of membership functions on X. This construction presents a new perspective on the Representation Theorem of Mendel and John. The multivariate modeling underpinning the type-2 fuzzy sets can also constrain realizable forms of membership functions. Because averaging operators such as centroid and subsethood on type-2 fuzzy sets involve a search for optima over membership functions, constraining this search can make computation easier and tighten the results. We demonstrate how the construction can be used to combine representations of concepts and how it therefore provides an additional tool, alongside standard operations such as intersection and subsethood, for concept fusion and computing with words.
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
Preservation Of Properties Of Interval-Valued Fuzzy Relations The goal of this paper is to consider properties of the composition of interval-valued fuzzy relations, which were introduced by L.A. Zadeh in 1975. Fuzzy set theory, introduced by Zadeh in 1965, turned out to be a useful tool to describe situations in which the data are imprecise or vague. Interval-valued fuzzy set theory, which was also introduced by Zadeh, is a generalization of fuzzy set theory. This paper generalizes some properties of interval matrices considered by Pekala (2007) to those of interval-valued fuzzy relations.
System-supported individualized customer communication in the multi-channel world of the financial services industry - representing customer attitudes in a customer model
Nonparametric sparsity and regularization In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to consider a sparse nonparametric model, hence avoiding linear or additive models. The key idea is to measure the importance of each variable in the model by making use of partial derivatives. Based on this intuition we propose a new notion of nonparametric sparsity and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm corresponds to a minimization problem which can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied both in terms of prediction and selection performance. An extensive empirical analysis shows that the proposed method performs favorably with respect to the state-of-the-art methods.
Pre-ATPG path selection for near optimal post-ATPG process space coverage Path delay testing is becoming increasingly important for high-performance chip testing in the presence of process variation. To guarantee full process space coverage, the ensemble of critical paths of all chips irrespective of their manufacturing process conditions needs to be tested, as different chips may have different critical paths. Existing coverage-based path selection techniques, however, suffer from the loss of coverage after ATPG (automatic test pattern generation), i.e., although the pre-ATPG path selection achieves good coverage, after ATPG, the coverage can be severely reduced as many paths turn out to be unsensitizable. This paper presents a novel path selection algorithm that, without running ATPG, selects a set of n paths to achieve near optimal post-ATPG coverage. Details of the algorithm and its optimality conditions are discussed. Experimental results show that, compared to the state-of-the-art, the proposed algorithm achieves not only superior post-ATPG coverage, but also significant runtime speedup.
1.010402
0.014286
0.014286
0.010177
0.007876
0.007143
0.003882
0.001442
0.00034
0.000058
0.000001
0
0
0
Principle Hessian direction based parameter reduction for interconnect networks with process variation As CMOS technology enters the nanometer regime, the increasing process variation is bringing a manifest impact on circuit performance. To accurately take account of both global and local process variations, a large number of random variables (or parameters) have to be incorporated into circuit models. This measure in turn raises the complexity of the circuit models. The current paper proposes a Principle Hessian Direction (PHD) based parameter reduction approach for interconnect networks. The proposed approach relies on each parameter's impact on circuit performance to decide whether to keep or reduce the parameter. Compared with the existing principal component analysis (PCA) method, this performance based property provides us with a significantly smaller parameter set after reduction. The experimental results also support our conclusions. In interconnect cases, the proposed method reduces 70% of parameters. In some cases (the mesh example in the current paper), the new approach leads to an 85% reduction. We also tested ISCAS benchmarks. In all cases, an average of 53% reduction is observed with less than 3% error in mean and less than 8% error in variation.
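A hedged sketch of the general idea of ranking parameter directions by the curvature of a performance function: estimate a Hessian by finite differences and keep the leading eigen-directions. This is a generic construction for illustration; the paper's PHD estimator is built differently (from sampled data) and targets interconnect models.

```python
# Hedged sketch of Hessian-direction based importance ranking: estimate the
# Hessian of a performance function f(p) at the nominal point by central
# finite differences, then keep the eigen-directions with the largest
# |eigenvalue| as the reduced parameter set.  The delay model is hypothetical.
import numpy as np

def numeric_hessian(f, p0, h=1e-4):
    d = len(p0)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            pp, pm, mp, mm = (np.array(p0, float) for _ in range(4))
            pp[i] += h; pp[j] += h
            pm[i] += h; pm[j] -= h
            mp[i] -= h; mp[j] += h
            mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

# Hypothetical delay model: strongly curved in a mix of p0 and p1, almost flat in p3.
f = lambda p: 100 + 4 * p[0] + (p[0] + 0.5 * p[1])**2 + 0.1 * p[2]**2 + 1e-3 * p[3]
H = numeric_hessian(f, np.zeros(4))
eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(-np.abs(eigvals))
print("ranked curvatures:", np.round(eigvals[order], 3))
print("leading direction:", np.round(eigvecs[:, order[0]], 3))
```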
Hierarchical Modeling, Optimization, and Synthesis for System-Level Analog and RF Designs The paper describes the recent state of the art in hierarchical analog synthesis, with a strong emphasis on associated techniques for computer-aided model generation and optimization. Over the past decade, analog design automation has progressed to the point where there are industrially useful and commercially available tools at the cell level-tools for analog components with 10-100 devices. Automated techniques for device sizing, for layout, and for basic statistical centering have been successfully deployed. However, successful component-level tools do not scale trivially to system-level applications. While a typical analog circuit may require only 100 devices, a typical system such as a phase-locked loop, data converter, or RF front-end might assemble a few hundred such circuits, and comprise 10 000 devices or more. And unlike purely digital systems, mixed-signal designs typically need to optimize dozens of competing continuous-valued performance specifications, which depend on the circuit designer's abilities to successfully exploit a range of nonlinear behaviors across levels of abstraction from devices to circuits to systems. For purposes of synthesis or verification, these designs are not tractable when considered "flat." These designs must be approached with hierarchical tools that deal with the system's intrinsic design hierarchy. This paper surveys recent advances in analog design tools that specifically deal with the hierarchical nature of practical analog and RF systems. We begin with a detailed survey of algorithmic techniques for automatically extracting a suitable nonlinear macromodel from a device-level circuit. Such techniques are critical to both verification and synthesis activities for complex systems. We then survey recent ideas in hierarchical synthesis for analog systems and focus in particular on numerical techniques for handling the large number of degrees of freedom in these designs and for exploring the space of performance tradeoffs ear- - ly in the design process. Finally, we briefly touch on recent ideas for accommodating models of statistical manufacturing variations in these tools and flows
Efficient moment estimation with extremely small sample size via bayesian inference for analog/mixed-signal validation A critical problem in pre-Silicon and post-Silicon validation of analog/mixed-signal circuits is to estimate the distribution of circuit performances, from which the probability of failure and parametric yield can be estimated at all circuit configurations and corners. With extremely small sample size, traditional estimators are only capable of achieving a very low confidence level, leading to either over-validation or under-validation. In this paper, we propose a multi-population moment estimation method that significantly improves estimation accuracy under small sample size. In fact, the proposed estimator is theoretically guaranteed to outperform usual moment estimators. The key idea is to exploit the fact that simulation and measurement data collected under different circuit configurations and corners can be correlated, and are conditionally independent. We exploit such correlation among different populations by employing a Bayesian framework, i.e., by learning a prior distribution and applying maximum a posteriori estimation using the prior. We apply the proposed method to several datasets including post-silicon measurements of a commercial highspeed I/O link, and demonstrate an average error reduction of up to 2×, which can be equivalently translated to significant reduction of validation time and cost.
Principle Hessian direction based parameter reduction with process variation As CMOS technology enters the nanometer regime, the increasing process variation is having a manifest impact on circuit performance. In this paper, we propose a Principle Hessian Direction (PHD) based parameter reduction approach. This new approach relies on the impact of each parameter on circuit performance to decide whether the parameter should be kept or reduced. Compared with the existing principal component analysis (PCA) method, this performance-based criterion yields a significantly smaller set of parameters after reduction. The experimental results support our conclusions: in all cases, an average reduction of 53% is observed with less than 3% error in the mean value and less than 8% error in the variation.
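As rough intuition for ranking parameters by their second-order effect on a performance metric, the hypothetical sketch below estimates the Hessian of a scalar performance function by finite differences and keeps the eigendirections with the largest curvature. This is only a simplified stand-in for the statistical principal-Hessian-direction estimator used in work of this kind; the function and parameter names are illustrative.

```python
import numpy as np

def hessian_fd(f, p0, h=1e-3):
    """Finite-difference Hessian of a scalar performance function f at p0."""
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p0.copy(); pp[i] += h; pp[j] += h
            pm = p0.copy(); pm[i] += h; pm[j] -= h
            mp = p0.copy(); mp[i] -= h; mp[j] += h
            mm = p0.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

def principal_hessian_directions(f, p0, keep):
    """Return the `keep` eigendirections of the Hessian with largest |curvature|."""
    H = hessian_fd(f, np.asarray(p0, dtype=float))
    w, V = np.linalg.eigh(H)
    order = np.argsort(-np.abs(w))          # rank directions by |curvature|
    return V[:, order[:keep]], w[order[:keep]]

# Toy performance: only a 2-D subspace of the 6 parameters really matters.
perf = lambda p: (p[0] + 0.5 * p[1]) ** 2 + (p[2] - p[3]) ** 2
V, w = principal_hessian_directions(perf, np.zeros(6), keep=2)
print(w)   # two dominant curvatures; the remaining directions are negligible
```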
Fast variational interconnect delay and slew computation using quadratic models Interconnects constitute a dominant source of circuit delay for modern chip designs. The variations of critical dimensions in modern VLSI technologies lead to variability in interconnect performance that must be fully accounted for in timing verification. However, handling a multitude of inter-die/intra-die variations and assessing their impacts on circuit performance can dramatically complicate the timing analysis. In this paper, a practical interconnect delay and slew analysis technique is presented to facilitate efficient evaluation of wire performance variability. By harnessing a collection of computationally efficient procedures and closed-form formulas, process variations are directly mapped into the variability of the output delay and slew. An efficient method based on sensitivity analysis is implemented to calculate driving point models under variations for gate-level timing analysis. The proposed adjoint technique not only provides statistical performance variations of the interconnect network under analysis, but also produces delay and slew expressions parameterized in the underlying process variations in a quadratic parametric form. As such, it can be harnessed to enable statistical timing analysis while considering important statistical correlations. Our experimental results have indicated that the presented analysis is accurate regardless of location of sink nodes and it is also robust over a wide range of process variations.
Fast second-order statistical static timing analysis using parameter dimension reduction The ability to account for the growing impacts of multiple process variations in modern technologies is becoming an integral part of nanometer VLSI design. In the context of timing analysis, the need for combating process variations has sparked a growing body of statistical static timing analysis (SSTA) techniques. While first-order SSTA techniques enjoy the good runtime efficiency desired for tackling large industrial designs, more accurate second-order SSTA techniques have been proposed to improve the analysis accuracy, but at the cost of high computational complexity. Although many sources of variations may impact the circuit performance, considering a large number of inter-die and intra-die variations in traditional SSTA analysis is very challenging. In this paper, we address the analysis complexity brought by high parameter dimensionality in static timing analysis and propose an accurate yet fast second-order SSTA algorithm based upon novel parameter dimension reduction. By developing reduced-rank-regression-based parameter reduction algorithms within a block-based SSTA flow, we demonstrate that accurate second-order SSTA analysis can be extended to a much higher parameter dimensionality than was previously possible. Our experimental results show that the proposed parameter reduction can achieve up to 10X parameter dimension reduction and lead to significantly improved second-order SSTA analysis under a large set of process variations.
An Evaluation Method of the Number of Monte Carlo STA Trials for Statistical Path Delay Analysis We present an evaluation method for estimating a lower bound on the number of Monte Carlo STA trials required to obtain at least one sample which falls within the top-k% of its parent population. The sample can be used to ensure that target designs are free of timing errors with a predefined probability at the minimum computational cost. The lower bound is given by a closed-form formula which is general enough to be applied to other verification tasks. For validation, Monte Carlo STA was carried out on various benchmark data including ISCAS circuits. The minimum number of Monte Carlo runs determined using the proposed method successfully extracted one or more top-k% delay instances.
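The abstract does not spell out the formula, but the standard order-statistics bound behind this kind of guarantee is that N independent trials contain at least one top-k% sample with probability 1 - (1 - k/100)^N, which gives N >= ln(1 - p) / ln(1 - k/100) for a target confidence p. A minimal sketch, assuming that this is the bound intended:

```python
import math

def min_mc_trials(k_percent, confidence):
    """Smallest N such that P(at least one of N i.i.d. samples falls in the
    top-k% tail) >= confidence, i.e. 1 - (1 - k/100)**N >= confidence."""
    q = 1.0 - k_percent / 100.0
    return math.ceil(math.log(1.0 - confidence) / math.log(q))

# e.g. to hit a top-1% delay instance with 99% confidence:
print(min_mc_trials(1.0, 0.99))   # 459 trials
```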
Measurement and characterization of pattern dependent process variations of interconnect resistance, capacitance and inductance in nanometer technologies Process variations have become a serious concern for nanometer technologies. The interconnect and device variations include inter- and intra-die variations of geometries, as well as process and electrical parameters. In this paper, pattern (i.e. density, width and space) dependent interconnect thickness and width variations are studied based on a well-designed test chip in a 90 nm technology. The parasitic resistance and capacitance variations due to the process variations are investigated, and process-variation-aware extraction techniques are proposed. In the test chip, electrical and physical measurements show strong metal thickness and width variations mainly due to chemical mechanical polishing (CMP) in nanometer technologies. The loop inductance dependence of return patterns is also validated in the test chip. The proposed new characterization methods extract interconnect RC variations as a function of metal density, width and space. Simulation results show excellent agreement between on-wafer measurements and extractions of various RC structures, including a set of metal loaded/unloaded ring oscillators in a complex wiring environment.
An exact algorithm for the statistical shortest path problem Graph algorithms are widely used in VLSI CAD. Traditional graph algorithms can handle graphs with deterministic edge weights. As VLSI technology continues to scale into nanometer designs, we need to use probability distributions for edge weights in order to model uncertainty due to parameter variations. In this paper, we consider the statistical shortest path (SSP) problem. Given a graph G, the edge weights of G are random variables. For each path P in G, let L_P be its length, which is the sum of all edge weights on P. Clearly L_P is a random variable and we let μ_P and σ_P² be its mean and variance, respectively. In the SSP problem, our goal is to find a path P connecting two given vertices to minimize the cost function μ_P + Φ(σ_P²), where Φ is an arbitrary function. (For example, if Φ(x) ≡ 3√x, the cost function is μ_P + 3σ_P.) To minimize uncertainty in the final result, it is meaningful to look for paths with bounded variance, i.e., σ_P² ≤ B for a given fixed bound B. In this paper, we present an exact algorithm to solve the SSP problem in O(B(V + E)) time, where V and E are the numbers of vertices and edges, respectively, in G. Our algorithm is superior to previous algorithms for the SSP problem because we can handle: 1) general graphs (unlike previous works applicable only to directed acyclic graphs), 2) arbitrary edge-weight distributions (unlike previous algorithms designed only for specific distributions such as Gaussian), and 3) a general cost function (none of the previous algorithms can even handle the cost function μ_P + 3σ_P). Finally, we discuss applications of the SSP problem to maze routing, buffer insertion, and timing analysis under parameter variations.
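The paper's exact O(B(V+E)) procedure is not given in the abstract, but its essence can be sketched as a shortest-path computation over states (vertex, accumulated variance ≤ B) followed by a scan over the variance budget. The code below is such a hypothetical sketch, assuming nonnegative edge means and nonnegative integer edge variances so that a Dijkstra-style relaxation applies; it is not the paper's algorithm.

```python
import heapq

def statistical_shortest_path(n, edges, s, t, B, phi):
    """Sketch of the SSP idea: run Dijkstra over states (node, total variance),
    keeping the smallest accumulated mean per state, then pick the variance
    budget w <= B minimizing mean + phi(w).

    edges: list of directed edges (u, v, mean, var) with var a nonnegative integer.
    """
    adj = [[] for _ in range(n)]
    for u, v, m, var in edges:
        adj[u].append((v, m, var))
    INF = float("inf")
    best = [[INF] * (B + 1) for _ in range(n)]   # best[v][w] = min mean with variance w
    best[s][0] = 0.0
    pq = [(0.0, s, 0)]
    while pq:
        mu, u, w = heapq.heappop(pq)
        if mu > best[u][w]:
            continue
        for v, m, var in adj[u]:
            nw = w + var
            if nw <= B and mu + m < best[v][nw]:
                best[v][nw] = mu + m
                heapq.heappush(pq, (mu + m, v, nw))
    return min((best[t][w] + phi(w), w) for w in range(B + 1))

# Toy graph: path 0->1->3 has mean 4, variance 1; path 0->2->3 has mean 3, variance 6.
edges = [(0, 1, 2.0, 1), (1, 3, 2.0, 0), (0, 2, 1.0, 3), (2, 3, 2.0, 3)]
print(statistical_shortest_path(4, edges, 0, 3, B=8, phi=lambda w: 3 * w ** 0.5))
```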
Algorithms in FastImp: a fast and wide-band impedance extraction program for complicated 3-D geometries In this paper, we describe the algorithms used in FastImp, a program for accurate analysis of wide-band electromagnetic effects in very complicated geometries of conductors. The program is based on a recently developed surface integral formulation and a precorrected fast Fourier transform (FFT) accelerated iterative method, but includes a new piecewise quadrature panel integration scheme, a new scaling and preconditioning technique as well as a generalized grid interpolation and projection strategy. Computational results are given on a variety of integrated circuit interconnect structures to demonstrate that FastImp is robust and can accurately analyze very complicated geometries of conductors.
Image denoising via sparse and redundant representations over learned dictionaries. We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
Concurrent Behaviour: Sequences, Processes and Axioms Two ways of describing the behaviour of concurrent systems have widely been suggested: arbitrary interleaving and partial orders. Sometimes the latter has been claimed superior because concurrency is represented in a "true" way; on the other hand, some authors have claimed that the former is sufficient for all practical purposes.
IPTV quality assessment system Due to the increasing deployment of real-time multimedia services like IPTV and videoconferencing, the Internet faces new challenges. These new real-time applications require reliable network performance in order to provide good Quality of Service (QoS), so it is important for service providers to estimate the quality offered and, regardless of the transport network, to know the quality perceived by the user. For this it is important to have tools to evaluate the quality of service provided. This paper presents a system for IPTV quality assessment. It allows us to study the user's perceived quality for different codecs, bit rates, frame rates and video resolutions, and the impact of the network packet loss rate, in order to determine the objective and subjective quality. We propose an application simulating packet loss as a function of network parameters, which can be used to obtain the received video with different network impairments, without the need for transmitting it. It has two main advantages: first, it avoids the need to transmit the video a number of times; second, it allows test repeatability.
Efficient Decision-Making Scheme Based on LIOWAD. A new decision-making method called the linguistic induced ordered weighted averaging distance (LIOWAD) operator is presented, built by using induced aggregation operators and linguistic information in the Hamming distance. This aggregation operator provides a parameterized family of linguistic aggregation operators that includes the maximum distance, the minimum distance, the linguistic normalized Hamming distance, the linguistic weighted Hamming distance and the linguistic ordered weighted averaging distance, among others. Special attention is given to the analysis of different particular types of LIOWAD operators. The paper ends with an application of the new approach to a decision-making problem about the selection of investments in a linguistic environment.
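As a purely numeric illustration of an induced ordered weighted averaging distance (with linguistic labels replaced by their indices on an ordinal scale), the hypothetical helper below reorders the per-criterion distances by an order-inducing variable and then applies an OWA weighting vector. The paper's LIOWAD operator works on linguistic information directly; this sketch only shows the aggregation mechanics, and all names and values are illustrative.

```python
import numpy as np

def liowad(inducing, x_labels, y_labels, weights, scale_size):
    """Sketch of an induced ordered weighted averaging distance over an
    ordinal linguistic scale (labels encoded as integer indices 0..g).

    inducing : order-inducing variables, one per criterion
    x_labels, y_labels : linguistic assessments of the two alternatives
    weights  : OWA weighting vector (sums to 1)
    """
    # Normalized per-criterion distances between the two label vectors.
    d = np.abs(np.asarray(x_labels) - np.asarray(y_labels)) / (scale_size - 1)
    order = np.argsort(-np.asarray(inducing))     # reorder by the inducing variable
    return float(np.dot(weights, d[order]))

# A 7-term scale s0..s6, three criteria, inducing variable = criterion importance.
print(liowad(inducing=[0.9, 0.5, 0.7],
             x_labels=[6, 3, 4], y_labels=[2, 3, 5],
             weights=[0.5, 0.3, 0.2], scale_size=7))
```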
1.071582
0.0147
0.012193
0.012153
0.007333
0.003524
0.000821
0.000432
0.000167
0.000017
0
0
0
0
Regular Expressions for Linear Sequential Circuits
The Vienna Definition Language
General formulation of formal grammars By extracting the basic properties common to the formal grammars appeared in existing literatures, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Matrix Equations and Normal Forms for Context-Free Grammars The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is first pointed out. The closure operation on a matrix of strings is defined and this concept is used to formalize the solution to a set of linear equations. A procedure is then given for rewriting a context-free grammar in Greibach normal form, where the replacements string of each production begins with a terminal symbol. An additional procedure is given for rewriting the grammar so that each replacement string both begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular expressions over the total vocabulary of the grammar, as is required by Greibach's procedure.
Fuzzy Algorithms
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Dynamic system modeling using a recurrent interval-valued fuzzy neural network and its hardware implementation This paper first proposes a new recurrent interval-valued fuzzy neural network (RIFNN) for dynamic system modeling. A new hardware implementation technique for the RIFNN using a field-programmable gate array (FPGA) chip is then proposed. The antecedent and consequent parts in an RIFNN use interval-valued fuzzy sets in order to increase the network noise resistance ability. A new recurrent structure is proposed in RIFNN, with the recurrent loops enabling it to handle dynamic system processing problems. An RIFNN is constructed from structure and parameter learning. For hardware implementation of the RIFNN, the pipeline technique and a new circuit for type-reduction operation are proposed to improve the chip performance. Simulations and comparisons with various feedforward and recurrent fuzzy neural networks verify the performance of the RIFNN under noisy conditions.
Development of a type-2 fuzzy proportional controller Studies have shown that PID controllers can be realized by type-1 (conventional) fuzzy logic systems (FLSs). However, the input-output mappings of such fuzzy PID controllers are fixed. The control performance would, therefore, vary if the system parameters are uncertain. This paper aims at developing a type-2 FLS to control a process whose parameters are uncertain. A method for designing type-2 triangular membership functions with the desired generalized centroid is first proposed. By using this type-2 fuzzy set to partition the output domain, a type-2 fuzzy proportional controller is obtained. It is shown that the type-2 fuzzy logic system is equivalent to a proportional controller that may assume a range of gains. Simulation results are presented to demonstrate that the performance of the proposed controller can be maintained even when the system parameters deviate from their nominal values.
A hybrid multi-criteria decision-making model for firms competence evaluation In this paper, we present a hybrid multi-criteria decision-making (MCDM) model to evaluate the competence of firms. Competence-based theory reveals that firm competencies arise from the exclusive and unique capabilities that each firm enjoys in the marketplace and are tightly intertwined with different business functions throughout the company. Therefore, competence in the firm is a composite of various attributes, many of them intangible or tangible attributes that are difficult to measure. In order to overcome this issue, we introduce fuzzy set theory into the measurement of performance. In this paper we first calculate the weight of each criterion through the adaptive analytic hierarchy process (AHP) approach (A^3) method, and then we appraise the performance of firms via linguistic variables which are expressed as trapezoidal fuzzy numbers. In the next step we transform these fuzzy numbers into interval data by means of α-cuts. Then, considering different values for α, we rank the firms through the TOPSIS method with interval data. Since there are different ranks for different α values, we apply the linear assignment method to obtain a final ranking of the alternatives.
Fuzzy decision making with immediate probabilities We developed a new decision-making model with probabilistic information and used the concept of the immediate probability to aggregate the information. This type of probability modifies the objective probability by introducing the attitudinal character of the decision maker. In doing so, we use the ordered weighted averaging (OWA) operator. When using this model, it is assumed that the information is given by exact numbers. However, this may not be the real situation found within the decision-making problem. Sometimes, the information is vague or imprecise and it is necessary to use another approach to assess the information, such as the use of fuzzy numbers. Then, the decision-making problem can be represented more completely because we now consider the best and worst possible scenarios, along with the possibility that some intermediate event (an internal value) will occur. We will use the fuzzy ordered weighted averaging (FOWA) operator to aggregate the information with the probabilities. As a result, we will get the Immediate Probability-FOWA (IP-FOWA) operator. We will study some of its main properties. We will apply the new approach in a decision-making problem about selection of strategies.
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
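For the standard ℓ2-ℓ1 case, each subproblem with a constant diagonal Hessian reduces to a soft-thresholding step. The sketch below is a plain ISTA-style iteration with a fixed step length rather than the Barzilai-Borwein steps used by SpaRSA, so it should be read as a simplified stand-in rather than the paper's method; sizes and the regularization weight are illustrative.

```python
import numpy as np

def ista_l1(A, b, lam, iters=200):
    """Minimal iterative shrinkage sketch for min 0.5*||Ax-b||^2 + lam*||x||_1.
    Each step solves a separable quadratic subproblem with a constant diagonal
    Hessian (alpha*I), which reduces to soft-thresholding."""
    x = np.zeros(A.shape[1])
    alpha = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        u = x - grad / alpha                   # gradient step on the smooth term
        x = np.sign(u) * np.maximum(np.abs(u) - lam / alpha, 0.0)  # shrink
    return x

# Toy compressed-sensing style recovery of a 5-sparse vector.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200); x_true[rng.choice(200, 5, replace=False)] = rng.normal(size=5)
b = A @ x_true
x_hat = ista_l1(A, b, lam=0.01)
print(np.linalg.norm(x_hat - x_true))
```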
A fuzzy logic system for the detection and recognition of handwritten street numbers Fuzzy logic is applied to the problem of locating and reading street numbers in digital images of handwritten mail. A fuzzy rule-based system is defined that uses uncertain information provided by image processing and neural network-based character recognition modules to generate multiple hypotheses with associated confidence values for the location of the street number in an image of a handwritten address. The results of a blind test of the resultant system are presented to demonstrate the value of this new approach. The results are compared to those obtained using a neural network trained with backpropagation. The fuzzy logic system achieved higher performance rates
A possibilistic approach to the modeling and resolution of uncertain closed-loop logistics Closed-loop logistics planning is an important tactic for the achievement of sustainable development. However, the correlation among the demand, recovery, and landfilling makes the estimation of their rates uncertain and difficult. Although the fuzzy numbers can present such kinds of overlapping phenomena, the conventional method of defuzzification using level-cut methods could result in the loss of information. To retain complete information, the possibilistic approach is adopted to obtain the possibilistic mean and mean square imprecision index (MSII) of the shortage and surplus for uncertain factors. By applying the possibilistic approach, a multi-objective, closed-loop logistics model considering shortage and surplus is formulated. The two objectives are to reduce both the total cost and the root MSII. Then, a non-dominated solution can be obtained to support decisions with lower perturbation and cost. Also, the information on prediction interval can be obtained from the possibilistic mean and root MSII to support the decisions in the uncertain environment. This problem is non-deterministic polynomial-time hard, so a new algorithm based on the spanning tree-based genetic algorithm has been developed. Numerical experiments have shown that the proposed algorithm can yield comparatively efficient and accurate results.
1.200022
0.200022
0.200022
0.200022
0.066689
0.006263
0.000033
0.000026
0.000023
0.000019
0.000014
0
0
0
Linguistic modeling by hierarchical systems of linguistic rules In this paper, we propose an approach to design linguistic models which are accurate to a high degree and may be suitably interpreted. This approach is based on the development of a hierarchical system of linguistic rules learning methodology. This methodology has been thought as a refinement of simple linguistic models which, preserving their descriptive power, introduces small changes to increase their accuracy. To do so, we extend the structure of the knowledge base of fuzzy rule base systems in a hierarchical way, in order to make it more flexible. This flexibilization will allow us to have linguistic rules defined over linguistic partitions with different granularity levels, and thus to improve the modeling of those problem subspaces where the former models have bad performance
An Integrated Methodology Using Linguistic Promethee And Maximum Deviation Method For Third-Party Logistics Supplier Selection The purpose of this paper is to present a framework and a suitable method for selecting the best logistics supplier. In general, many quantitative and qualitative criteria should be considered simultaneously when making the decision of logistics supplier selection. The information for judging the performance of logistics suppliers comes from customers' opinions, experts' opinions and operational data in the real environment. In this situation, the selection of logistics suppliers becomes a decision-making problem involving uncertainty and fuzziness. Therefore, we combine the linguistic PROMETHEE method with the maximum deviation method to determine the ranking order of logistics suppliers. An example is then implemented to demonstrate the practicability of the proposed method. Finally, some conclusions are discussed at the end of this paper.
Some relationships between fuzzy and random set-based classifiers and models When designing rule-based models and classifiers, some precision is sacrificed to obtain linguistic interpretability. Understandable models are not expected to outperform black boxes, but usually fuzzy learning algorithms are statistically validated by contrasting them with black-box models. Unless performance of both approaches is equivalent, it is difficult to judge whether the fuzzy one is doing its best, because the precision gap between the best understandable model and the best black-box model is not known.In this paper we discuss how to generate probabilistic rule-based models and classifiers with the same structure as fuzzy rule-based ones. Fuzzy models, in which features are partitioned into linguistic terms, will be compared to probabilistic rule-based models with the same number of terms in every linguistic partition. We propose to use these probabilistic models to estimate a lower precision limit which fuzzy rule learning algorithms should surpass.
Enhanced interval type-2 fuzzy c-means algorithm with improved initial center Uncertainties are common in applications such as pattern recognition and image processing, where the FCM algorithm is widely employed. However, FCM does not handle these uncertainties well. Interval type-2 fuzzy theory has been incorporated into FCM to improve its ability to handle uncertainties, but the complexity of the algorithm increases accordingly. In this paper an enhanced interval type-2 FCM algorithm is proposed in order to reduce these shortfalls. The initialization of the cluster centers and the process of type-reduction are optimized in this algorithm, which greatly reduces the calculation time of interval type-2 FCM and accelerates the convergence of the algorithm. Many simulations have been performed on random data clustering and image segmentation to show the validity of the proposed algorithm.
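For reference, the type-1 FCM iteration that interval type-2 variants extend alternates a center update and a membership update. The sketch below is this standard baseline, not the enhanced interval type-2 algorithm of the paper; the parameter names and the toy data are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Standard (type-1) fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                         # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # membership ~ inverse distance
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated blobs.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)
```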
Tuning The Matching Function For A Threshold Weighting Semantics In A Linguistic Information Retrieval System Information retrieval is an activity that attempts to produce documents that better fulfill user information needs. To achieve this, an information retrieval system uses matching functions that specify the degree of relevance of a document with respect to a user query. Assuming linguistically weighted queries, we present a new linguistic matching function for a threshold weighting semantics that is defined using a 2-tuple fuzzy linguistic approach (Herrera F, Martinez L. IEEE Trans Fuzzy Syst 2000;8:746-752). This new 2-tuple linguistic matching function can be interpreted as a tuning of that defined in "Modelling the Retrieval Process for an Information Retrieval System Using an Ordinal Fuzzy Linguistic Approach" (Herrera-Viedma E. J Am Soc Inform Sci Technol 2001;52:460-475). We show that it simplifies the processes of computing in the retrieval activity, avoids the loss of precision in final results, and, consequently, can help to improve users' satisfaction.
Predicting correlations properties of crude oil systems using type-2 fuzzy logic systems This paper presents a new prediction model of the pressure-volume-temperature (PVT) properties of crude oil systems using type-2 fuzzy logic systems. PVT properties are very important in reservoir engineering computations, and their accurate determination is important in the primary and subsequent development of an oil field. Earlier models suffer from several limitations, especially in uncertain situations, and often exhibit instability in their predictions. In this work, a type-2 fuzzy logic based model is presented to improve PVT predictions. In the formulation used, the value of a membership function corresponding to a particular PVT property value is no longer a crisp value; rather, it is associated with a range of values that can be characterized by a function that reflects the level of uncertainty. In this way, the model is able to adequately model PVT properties. Comparative studies have been carried out and empirical results show that the type-2 FLS approach outperforms the others in general, and particularly in the areas of stability, consistency and the ability to adequately handle uncertainties. Another unique advantage of the newly proposed model is its ability to generate, in addition to the normal target forecast, prediction intervals without extra computational cost.
A new evaluation model for intellectual capital based on computing with linguistic variable In the knowledge era, intellectual capital has become a determinant resource for an enterprise to retain and improve competitive advantage. Because the nature of intellectual capital is abstract, intangible, and difficult to measure, it is a challenge for business managers to evaluate intellectual capital performance effectively. Recently, several methods have been proposed to assist business managers in evaluating the performance of intellectual capital. However, they also face information loss problems during the integration of subjective evaluations. Therefore, this paper proposes a suitable model for intellectual capital performance evaluation by combining the 2-tuple fuzzy linguistic approach with a multiple criteria decision-making (MCDM) method. This makes it feasible to carry out the evaluation integration process while effectively avoiding information loss. Based on the proposed model, its feasibility is demonstrated by the results of an intellectual capital performance evaluation for a high-technology company in Taiwan.
Modeling the retrieval process for an information retrieval system using an ordinal fuzzy linguistic approach A linguistic model for an Information Retrieval System (IRS) defined using an ordinal fuzzy linguistic approach is proposed. The ordinal fuzzy linguistic approach is presented, and its use for modeling the imprecision and subjectivity that appear in the user-IRS interaction is studied. The user queries and IRS responses are modeled linguistically using the concept of fuzzy linguistic variables. The system accepts Boolean queries whose terms can be weighted simultaneously by means of ordinal linguistic values according to three possible semantics: a symmetrical threshold semantic, a quantitative semantic, and an importance semantic. The first one identifies a new threshold semantic used to express qualitative restrictions on the documents retrieved for a given term. It is monotone increasing in index term weight for the threshold values that are on the right of the mid-value, and decreasing for the threshold values that are on the left of the mid-value. The second one is a new semantic proposal introduced to express quantitative restrictions on the documents retrieved for a term, i.e., restrictions on the number of documents that must be retrieved containing that term. The last one is the usual semantic of relative importance that has an effect when the term is in a Boolean expression. A bottom-up evaluation mechanism of queries is presented that coherently integrates the use of the three semantics and satisfies the separability property. The advantage of this IRS with respect to others is that users can express linguistically different semantic restrictions on the desired documents simultaneously, incorporating more flexibility in the user-IRS interaction.
Group decision making with linguistic preference relations with application to supplier selection Linguistic preference relation is a useful tool for expressing preferences of decision makers in group decision making according to linguistic scales. But in the real decision problems, there usually exist interactive phenomena among the preference of decision makers, which makes it difficult to aggregate preference information by conventional additive aggregation operators. Thus, to approximate the human subjective preference evaluation process, it would be more suitable to apply non-additive measures tool without assuming additivity and independence. In this paper, based on the λ-fuzzy measure, we consider dependence among subjective preference of decision makers to develop some new linguistic aggregation operators such as linguistic ordered geometric averaging operator and extended linguistic Choquet integral operator to aggregate the multiplicative linguistic preference relations and additive linguistic preference relations, respectively. Further, the procedure and algorithm of group decision making based on these new linguistic aggregation operators and linguistic preference relations are given. Finally, a supplier selection example is provided to illustrate the developed approaches.
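The λ-fuzzy measure and Choquet integral machinery referenced above can be illustrated numerically: the measure parameter λ is the root of 1 + λ = Π(1 + λ·g_i) for the criterion densities g_i, and the Choquet integral aggregates scores sorted in decreasing order against the measures of cumulative sets. The sketch below shows this on plain numeric scores; it is not the paper's linguistic operators, and the densities used are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve 1 + lam = prod(1 + lam*g_i) for the Sugeno lambda-measure parameter."""
    g = np.asarray(densities, dtype=float)
    if abs(g.sum() - 1.0) < 1e-12:
        return 0.0                              # additive special case
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    # lambda lies in (-1, 0) if sum(g) > 1 and in (0, inf) if sum(g) < 1
    if g.sum() > 1.0:
        return brentq(f, -1.0 + 1e-9, -1e-9)
    return brentq(f, 1e-9, 1e9)

def choquet(values, densities):
    """Choquet integral of `values` w.r.t. the lambda-fuzzy measure built
    from the criterion densities (interaction-aware aggregation)."""
    lam = sugeno_lambda(densities)
    order = np.argsort(-np.asarray(values))     # sort criteria by decreasing value
    x = np.asarray(values, dtype=float)[order]
    g = np.asarray(densities, dtype=float)[order]
    total, g_prev = 0.0, 0.0
    for i in range(len(x)):
        g_cur = g_prev + g[i] + lam * g_prev * g[i]   # measure of the top-i set
        x_next = x[i + 1] if i + 1 < len(x) else 0.0
        total += (x[i] - x_next) * g_cur
        g_prev = g_cur
    return total

# Three criteria scored on a normalized scale, with interacting weights.
print(choquet([0.8, 0.5, 0.9], [0.4, 0.3, 0.5]))
```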
Modeling uncertainty in clinical diagnosis using fuzzy logic. This paper describes a fuzzy approach to computer-aided medical diagnosis in a clinical context. It introduces a formal view of diagnosis in clinical settings and shows the relevance and possible uses of fuzzy cognitive maps. A constraint satisfaction method is introduced that uses the temporal uncertainty in symptom durations that may occur with particular diseases. The method results in an estimate of the stage of the disease if the temporal constraints of the disease in relation to the occurrence of the symptoms are satisfied. A lightweight fuzzy process is described and evaluated in the context of diagnosis of two confusable diseases. The process is based on the idea of an incremental simple additive model for fuzzy sets supporting and negating particular diseases. These are combined to produce an index of support for a particular disease. The process is developed to allow fuzzy symptom information on the intensity and duration of symptoms. Results are presented showing the effectiveness of the method for supporting differential diagnosis.
Some aspects of intuitionistic fuzzy sets We first discuss the significant role that duality plays in many aggregation operations involving intuitionistic fuzzy subsets. We then consider the extension to intuitionistic fuzzy subsets of a number of ideas from standard fuzzy subsets. In particular we look at the measure of specificity. We also look at the problem of alternative selection when decision criteria satisfaction is expressed using intuitionistic fuzzy subsets. We introduce a decision paradigm called the method of least commitment. We briefly look at the problem of defuzzification of intuitionistic fuzzy subsets.
Beyond streams and graphs: dynamic tensor analysis How do we find patterns in author-keyword associations, evolving over time? Or in Data Cubes, with product-branch-customer sales information? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks and many more. However, they have only two orders, like author and keyword, in the above example. We propose to envision such higher order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce the dynamic tensor analysis (DTA) method, and its variants. DTA provides a compact summary for high-order and high-dimensional data, and it also reveals the hidden correlations. Algorithmically, we designed DTA very carefully so that it is (a) scalable, (b) space efficient (it does not need to store the past) and (c) fully automatic with no need for user defined parameters. Moreover, we propose STA, a streaming tensor analysis method, which provides a fast, streaming approximation to DTA. We implemented all our methods, and applied them in two real settings, namely, anomaly detection and multi-way latent semantic indexing. We used two real, large datasets, one on network flow data (100GB over 1 month) and one from DBLP (200MB over 25 years). Our experiments show that our methods are fast, accurate and that they find interesting patterns and outliers on the real datasets.
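A rough sketch of the bookkeeping behind this kind of incremental tensor analysis: maintain one (decayed) covariance matrix per tensor mode, refresh it with each incoming tensor's mode unfolding, and take the leading eigenvectors as the projection matrices. The code below is a simplified, hypothetical version of that idea in NumPy, not the paper's DTA/STA algorithms (which use more careful incremental eigen-updates); the tensor shapes and forgetting factor are illustrative.

```python
import numpy as np

def mode_unfold(X, d):
    """Mode-d unfolding: move axis d to the front and flatten the rest."""
    return np.moveaxis(X, d, 0).reshape(X.shape[d], -1)

def dta_update(covs, X, forget=0.98, rank=2):
    """One step of a dynamic-tensor-analysis style update: decay and refresh
    the per-mode covariance matrices with the new tensor, then return the
    leading eigenvectors (projection matrices) of each mode."""
    proj = []
    for d in range(X.ndim):
        Xd = mode_unfold(X, d)
        covs[d] = forget * covs[d] + Xd @ Xd.T
        w, V = np.linalg.eigh(covs[d])
        proj.append(V[:, np.argsort(-w)[:rank]])   # top-`rank` eigenvectors
    return proj

# A stream of 3-way (author x keyword x time-slot) count tensors.
rng = np.random.default_rng(3)
shape = (10, 8, 4)
covs = [np.zeros((s, s)) for s in shape]
for _ in range(20):
    X = rng.poisson(1.0, shape).astype(float)
    proj = dta_update(covs, X)
print([P.shape for P in proj])
```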
Merging distributed database summaries The database summarization system coined SaintEtiQ provides multi-resolution summaries of structured data stored in a centralized database. Summaries are computed online with a conceptual hierarchical clustering algorithm. However, most companies work in distributed legacy environments, and consequently the current centralized version of SaintEtiQ is either not feasible (privacy preserving) or not desirable (resource limitations). To address this problem, we propose new algorithms to generate a single summary hierarchy given two distinct hierarchies, without scanning the raw data. The Greedy Merging Algorithm (GMA) takes all leaves of both hierarchies and generates the optimal partitioning of the considered data set with regard to a cost function (compactness and separation). Then, a hierarchical organization of summaries is built by agglomerating or dividing clusters such that the cost function may emphasize local or global patterns in the data. Thus, we obtain two different hierarchies according to the performed optimisation. However, this approach breaks down due to its exponential time complexity. Two alternative approaches with constant time complexity w.r.t. the number of data items are proposed to tackle this problem. The first one, called the Merge by Incorporation Algorithm (MIA), relies on the SaintEtiQ engine, whereas the second approach, named the Merge by Alignment Algorithm (MAA), consists in rearranging summaries by levels in a top-down manner. Then, we compare those approaches using an original quality measure in order to quantify how good our merged hierarchies are. Finally, an experimental study, using real data sets, shows that the merging processes (MIA and MAA) are efficient in terms of computational time.
Robust LMIs with polynomial dependence on the uncertainty Solving robust linear matrix inequalities (LMIs) has long been recognized as an important problem in robust control. Although the solution to this problem is well-known for the case of affine dependence on the uncertainty, to the best of our knowledge, results for other types of dependence are limited. In this paper we address the problem of solving robust LMIs for the case of polynomial dependence on the uncertainty. More precisely, results from numerical integration of polynomial functions are used to develop procedures to minimize the volume of the set of uncertain parameters for which the LMI condition is violated.
1.012052
0.01077
0.010526
0.010526
0.005553
0.003509
0.001971
0.001033
0.000123
0.000032
0.000001
0
0
0
Spectral Polynomial Chaos Solutions of the Stochastic Advection Equation We present a new algorithm based on Wiener–Hermite functionals combined with Fourier collocation to solve the advection equation with stochastic transport velocity. We develop different strategies of representing the stochastic input, and demonstrate that this approach is orders of magnitude more efficient than Monte Carlo simulations for comparable accuracy.
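To make the comparison with Monte Carlo concrete: for u_t + V u_x = 0 with a Gaussian transport velocity V, the exact solution is u(x, t) = u0(x - V t), so its moments at a point can be computed with a handful of Gauss-Hermite quadrature nodes, the same quadrature that underlies a Wiener-Hermite (polynomial chaos) projection. The sketch below is this single-point illustration under an assumed Gaussian velocity; it is not the paper's Fourier-collocation solver.

```python
import numpy as np

# Exact solution of u_t + V u_x = 0 is u(x, t) = u0(x - V*t); with V ~ N(mu, sigma^2)
# the moments of the random solution at a point follow from Gauss-Hermite quadrature.
def advection_moments(u0, x, t, mu, sigma, n_quad=20):
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)   # nodes/weights for exp(-z^2/2)
    w = w / np.sqrt(2.0 * np.pi)                         # normalize to the Gaussian pdf
    vals = u0(x - (mu + sigma * z) * t)                  # sample the exact solution
    mean = np.dot(w, vals)
    var = np.dot(w, (vals - mean) ** 2)
    return mean, var

u0 = lambda x: np.exp(-x ** 2)          # Gaussian pulse initial condition
print(advection_moments(u0, x=1.0, t=0.5, mu=1.0, sigma=0.2))

# Monte Carlo check (needs far more samples for comparable accuracy):
rng = np.random.default_rng(4)
V = rng.normal(1.0, 0.2, 200000)
samples = u0(1.0 - V * 0.5)
print(samples.mean(), samples.var())
```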
Equation-Free, Multiscale Computation for Unsteady Random Diffusion We present an "equation-free" multiscale approach to the simulation of unsteady diffusion in a random medium. The diffusivity of the medium is modeled as a random field with short correlation length, and the governing equations are cast in the form of stochastic differential equations. A detailed fine-scale computation of such a problem requires discretization and solution of a large system of equations and can be prohibitively time consuming. To circumvent this difficulty, we propose an equation-free approach, where the fine-scale computation is conducted only for a (small) fraction of the overall time. The evolution of a set of appropriately defined coarse-grained variables (observables) is evaluated during the fine-scale computation, and "projective integration" is used to accelerate the integration. The choice of these coarse variables is an important part of the approach: they are the coefficients of pointwise polynomial expansions of the random solutions. Such a choice of coarse variables allows us to reconstruct representative ensembles of fine-scale solutions with "correct" correlation structures, which is a key to algorithm efficiency. Numerical examples demonstrating accuracy and efficiency of the approach are presented.
Numerical studies of the stochastic Korteweg-de Vries equation We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation.
Future performance challenges in nanometer design We highlight several fundamental challenges to designing high-performance integrated circuits in nanometer-scale technologies (i.e., drawn feature sizes ...)
A convergence study for SPDEs using combined Polynomial Chaos and Dynamically-Orthogonal schemes. We study the convergence properties of the recently developed Dynamically Orthogonal (DO) field equations [1] in comparison with the Polynomial Chaos (PC) method. To this end, we consider a series of one-dimensional prototype SPDEs, whose solution can be expressed analytically, and which are associated with both linear (advection equation) and nonlinear (Burgers equation) problems with excitations that lead to unimodal and strongly bi-modal distributions. We also propose a hybrid approach to tackle the singular limit of the DO equations for the case of deterministic initial conditions. The results reveal that the DO method converges exponentially fast with respect to the number of modes (for the problems considered) giving same levels of computational accuracy comparable with the PC method but (in many cases) with substantially smaller computational cost compared to stochastic collocation, especially when the involved parametric space is high-dimensional.
From blind certainty to informed uncertainty The accuracy, computational efficiency, and reliability of static timing analysis have made it the workhorse for verifying the timing of synchronous digital integrated circuits for more than a decade. In this paper we charge that the traditional deterministic approach to analyzing the timing of circuits is significantly undermining its accuracy and may even challenge its reliability. We argue that computation of the static timing of a circuit requires a dramatic rethinking in order to continue serving its role as an enabler of high-performance designs. More fundamentally we believe that for circuits to be reliably designed the underlying probabilistic effects must be brought to the forefront of design and no longer hidden under conservative approximations. The reasons that justify such a radical transition are presented together with directions for solutions.
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L^2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
Death, taxes and failing chips In the way they cope with variability, present-day methodologies are onerous, pessimistic and risky, all at the same time! Dealing with variability is an increasingly important aspect of high-performance digital integrated circuit design, and indispensable for first-time-right hardware and cutting-edge performance. This invited paper discusses the methodology, analysis, synthesis and modeling aspects of this problem. These aspects of the problem are compared and contrasted in the ASIC and custom (microprocessor) domains. This paper pays particular attention to statistical timing analysis and enumerates desirable attributes that would render such an analysis capability practical and accurate.
Stochastic integral equation solver for efficient variation-aware interconnect extraction In this paper we present an efficient algorithm for extracting the complete statistical distribution of the input impedance of interconnect structures in the presence of a large number of random geometrical variations. The main contribution in this paper is the development of a new algorithm, which combines both Neumann expansion and Hermite expansion, to accurately and efficiently solve stochastic linear system of equations. The second contribution is a new theorem to efficiently obtain the coefficients of the Hermite expansion while computing only low order integrals. We establish the accuracy of the proposed algorithm by solving stochastic linear systems resulting from the discretization of the stochastic volume integral equation and comparing our results to those obtained from other techniques available in the literature, such as Monte Carlo and stochastic finite element analysis. We further prove the computational efficiency of our algorithm by solving large problems that are not solvable using the current state of the art.
Sparse Tensor Discretization of Elliptic sPDEs We propose and analyze sparse deterministic-stochastic tensor Galerkin finite element methods (sparse sGFEMs) for the numerical solution of elliptic partial differential equations (PDEs) with random coefficients in a physical domain $D\subset\mathbb{R}^d$. In tensor product sGFEMs, the variational solution to the boundary value problem is approximated in tensor product finite element spaces $V^\Gamma\otimes V^D$, where $V^\Gamma$ and $V^D$ denote suitable finite dimensional subspaces of the stochastic and deterministic function spaces, respectively. These approaches lead to sGFEM algorithms of complexity $O(N_\Gamma N_D)$, where $N_\Gamma=\dim V^\Gamma$ and $N_D=\dim V^D$. In this work, we use hierarchic sequences $V^\Gamma_1\subset V^\Gamma_2\subset\ldots$ and $V^D_1\subset V^D_2\subset\ldots$ of finite dimensional spaces to approximate the law of the random solution. The hierarchies of approximation spaces allow us to define sparse tensor product spaces $V^\Gamma_\ell\hat{\otimes}V^D_\ell$, $\ell=1,2,\dots$, yielding algorithms of $O(N_\Gamma\log N_D+N_D\log N_\Gamma)$ work and memory. We estimate the convergence rate of sGFEM for algebraic decay of the input random field Karhunen-Loève coefficients. We give an algorithm for an input adapted a-priori selection of deterministic and stochastic discretization spaces. The convergence rate in terms of the total number of degrees of freedom of the proposed method is superior to Monte Carlo approximations. Numerical examples illustrate the theoretical results and demonstrate superiority of the sparse tensor product discretization proposed here versus the full tensor product approach.
On sparse representations in arbitrary redundant bases The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases. The question that is considered is the following: given a matrix A of dimension (n,m) with m > n and a vector b=Ax, find a sufficient condition for b to have a unique sparsest representation x as a linear combination of columns of A. Answers to this question are known when A is the concatenation of two unitary matrices and either an extensive combinatorial search is performed or a linear program is solved. We consider arbitrary A matrices and give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program or a parametrized quadratic program. The proof is elementary and the possibility of using a quadratic program opens perspectives to the case where b=Ax+e with e a vector of noise or modeling errors.
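The linear program referred to above is the usual ℓ1-minimization (basis pursuit) route: minimize ||x||_1 subject to Ax = b, which becomes a standard LP after splitting x into nonnegative parts. A small sketch using scipy's linprog, with illustrative matrix sizes; recovery is exact only when the paper's sufficient sparsity condition holds.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b as a linear program by splitting
    x = u - v with u, v >= 0."""
    n, m = A.shape
    c = np.ones(2 * m)                       # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * m), method="highs")
    return res.x[:m] - res.x[m:]

# A 4-sparse vector in a redundant random dictionary (m > n).
rng = np.random.default_rng(5)
n, m = 30, 60
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m); x_true[rng.choice(m, 4, replace=False)] = rng.normal(size=4)
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))
```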
Dictionary identifiability from few training samples This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1 minimisation. The problem is to identify a dictionary Φ from a set of training samples Y knowing that Y = ΦX for some coefficient matrix X. Using a characterisation of coefficient matrices X that allow to recover any orthonormal basis (ONB) as a local minimum of an ℓ1 minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the ONB with high probability, for a number of training samples which essentially grows linearly with the signal dimension.
Exact inversion of decomposable interval type-2 fuzzy logic systems It has been demonstrated that type-2 fuzzy logic systems are much more powerful tools than ordinary (type-1) fuzzy logic systems to represent highly nonlinear and/or uncertain systems. As a consequence, type-2 fuzzy logic systems have been applied in various areas, especially in control system design and modelling. In this study, an exact inversion methodology is developed for decomposable interval type-2 fuzzy logic systems. In this context, the decomposition property is extended and generalized to interval type-2 fuzzy sets. Based on this property, the interval type-2 fuzzy logic system is decomposed into several interval type-2 fuzzy logic subsystems under a certain condition on the input space of the fuzzy logic system. Then, the analytical formulation of the inverse interval type-2 fuzzy logic subsystem output is explicitly derived for certain switching points of the Karnik-Mendel type-reduction method. The proposed exact inversion methodology derived for the interval type-2 fuzzy logic subsystem is generalized to the overall interval type-2 fuzzy logic system via the decomposition property. In order to demonstrate the feasibility of the proposed methodology, a simulation study is given in which the benefits of the proposed exact inversion methodology are clearly shown.
Robust LMIs with polynomial dependence on the uncertainty Solving robust linear matrix inequalities (LMIs) has long been recognized as an important problem in robust control. Although the solution to this problem is well-known for the case of affine dependence on the uncertainty, to the best of our knowledge, results for other types of dependence are limited. In this paper we address the problem of solving robust LMIs for the case of polynomial dependence on the uncertainty. More precisely, results from numerical integration of polynomial functions are used to develop procedures to minimize the volume of the set of uncertain parameters for which the LMI condition is violated.
1.013823
0.01212
0.011977
0.011832
0.006363
0.003944
0.001523
0.00034
0.000054
0.000017
0.000002
0
0
0
OTT-ISP joint service management: A Customer Lifetime Value based approach. In this work, we propose a QoE-aware collaboration approach between Over-The-Top providers (OTT) and Internet Service Providers (ISP) based on maximizing profit while considering the user churn of the Most Profitable Customers (MPCs), which are classified in terms of Customer Lifetime Value (CLV). The contribution of this work is fourfold. Firstly, we investigate the different perspectives of ISPs and OTTs regarding QoE management and why they should collaborate. Secondly, we investigate the current ongoing collaboration scenarios in the multimedia industry. Thirdly, we propose a QoE-aware collaboration framework based on the CLV, which includes the interfaces for information sharing between OTTs and ISPs and the use of Content Delivery Networks (CDN) and surrogate servers. Finally, we provide simulation results demonstrating that a higher profit is achieved when collaboration is introduced, since more MPCs are engaged than with current solutions.
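The abstract does not give the CLV model used; a common textbook formulation discounts the expected per-period margin by a retention probability and a discount rate. A minimal sketch, assuming that formulation and purely illustrative parameter values:

```python
def customer_lifetime_value(margin, retention, discount, horizon=60):
    """Textbook CLV: discounted sum of expected per-period margins, where the
    customer survives each period with probability `retention`."""
    return sum(margin * retention ** t / (1.0 + discount) ** t
               for t in range(1, horizon + 1))

# e.g. a subscriber worth 20/month, 95% monthly retention, 1% monthly discount:
print(round(customer_lifetime_value(20.0, 0.95, 0.01), 2))
```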
Qualia: A Multilayer Solution For Qoe Passive Monitoring At The User Terminal This paper focuses on passive Quality of Experience (QoE) monitoring at user end devices as a necessary activity of the ISP (Internet Service Provider) for effective quality-based service delivery. The contribution of the work is threefold. Firstly, we highlight the opportunities and challenges of QoE monitoring of Over-The-Top (OTT) applications while investigating the available interfaces for monitoring the deployed applications at the end device. Secondly, we propose a multilayer passive QoE monitor for OTT applications at the user terminal from the ISP's perspective. Five layers are considered: user profile, context, resource, application and network layers. Thirdly, we consider YouTube as a case study for OTT video streaming applications in our experiments for analyzing the impact of the monitoring cycle on end-device resources, such as battery, RAM and CPU utilization at the end-user device.
Ant colony optimization for QoE-centric flow routing in software-defined networks We present design, implementation, and an evaluation of an ant colony optimization (ACO) approach to flow routing in software-defined networking (SDN) environments. While exploiting a global network view and configuration flexibility provided by SDN, the approach also utilizes quality of experience (QoE) estimation models and seeks to maximize the user QoE for multimedia services. As network metrics (e.g., packet loss) influence QoE for such services differently, based on the service type and its integral media flows, the goal of our ACO-based heuristic algorithm is to calculate QoE-aware paths that conform to traffic demands and network limitations. A Java implementation of the algorithm is integrated into SDN controller OpenDaylight so as to program the path selections. The evaluation results indicate promising QoE improvements of our approach over shortest path routing, as well as low running time.
Cross-layer QoE-driven admission control and resource allocation for adaptive multimedia services in LTE. This paper proposes novel resource management mechanisms for multimedia services in 3GPP Long Term Evolution (LTE) networks aimed at enhancing session establishment success and network resources management, while maintaining acceptable end-user quality of experience (QoE) levels. We focus on two aspects, namely admission control mechanisms and resource allocation. Our cross-layer approach relies on application-level user- and service-related knowledge exchanged at session initiation time, whereby different feasible service configurations corresponding to different quality levels and resource requirements can be negotiated and passed on to network-level resource management mechanisms. We propose an admission control algorithm which admits sessions by considering multiple feasible configurations of a given service, and compare it with a baseline algorithm that considers only single service configurations, which is further related to other state-of-the-art algorithms. Our results show that admission probability can be increased in light of admitting less resource-demanding configurations in cases where resource restrictions prevent admission of a session at the highest quality level. Additionally, in case of reduced resource availability, we consider resource reallocation mechanisms based on controlled session quality degradation while maintaining user QoE above the acceptable threshold. Simulation results have shown that given a wireless access network with limited resources, our approach leads to increased session establishment success (i.e., fewer sessions are blocked) while maintaining acceptable user-perceived quality levels.
IP-Based Mobile and Fixed Network Audiovisual Media Services. This article provides a tutorial overview of current approaches for monitoring the quality perceived by users of IP-based audiovisual media services. The article addresses both mobile and fixed network services such as mobile TV or Internet Protocol TV (IPTV). It reviews the different quality models that exploit packet- header-, bit stream-, or signal-information for providing audio, video, and au...
Mobile quality of experience: Recent advances and challenges Quality of Experience (QoE) is important from both a user perspective, since it assesses the quality a user actually experiences, and a network perspective, since it is important for a provider to dimension its network to support the necessary QoE. This paper presents some recent advances on the modeling and measurement of QoE with an emphasis on mobile networks. It also identifies key challenges for mobile QoE.
Towards Still Image Experience Predictions in Augmented Vision Settings With the emergence of Augmented Reality (AR) services in a broad range of application scenarios, the interplay of (compressed) content as delivered and quality as experienced by the user becomes increasingly important. While significant research efforts have uncovered such interplays for traditional (opaque) media consumption scenarios, applications in augmented vision scenarios pose significant challenges. This paper focuses on the Quality of Experience (QoE) for a ground-truth reference still image set. We corroborate previous findings which indicate that there is only limited QoE benefit obtainable from very high image quality levels. The main contribution is the first evaluation of several popular objective image quality metrics and their relationships to the QoE in opaque and vision-augmenting presentation formats. For the first time, we provide an assessment of the possibility to predict the QoE based on these metrics. We find that low-degree regression is able to accurately capture the coefficients, and that fairly accurate prediction can be performed even with a non-referential image quality metric as parameter. We note, however, that prediction accuracy still fluctuates with the image content and the subjects removed from the data set. Our overall findings can be employed by AR service providers to perform an optimized delivery of still image content.
QoE in 10 seconds: Are short video clip lengths sufficient for Quality of Experience assessment? Standard methodologies for subjective video quality testing are based on very short test clips of 10 seconds. But is this duration sufficient for Quality of Experience assessment? In this paper, we present the results of a comparative user study that tests whether quality perception and rating behavior may be different if video clip durations are longer. We did not find strong overall MOS differences between clip durations, but the three longer clips (60, 120 and 240 seconds) were rated slightly more positively than the three shorter durations under comparison (10, 15 and 30 seconds). This difference was most apparent when high quality videos were presented. However, we did not find an interaction between content class and the duration effect itself. Furthermore, methodological implications of these results are discussed.
MOS-based multiuser multiapplication cross-layer optimization for mobile multimedia communication We propose a cross-layer optimization strategy that jointly optimizes the application layer, the data-link layer, and the physical layer of a wireless protocol stack using an application-oriented objective function. The cross-layer optimization framework provides efficient allocation of wireless network resources across multiple types of applications run by different users to maximize network resource usage and user perceived quality of service. We define a novel optimization scheme based on the mean opinion score (MOS) as the unifying metric over different application classes. Our experiments, applied to scenarios where users simultaneously run three types of applications, namely voice communication, streaming video and file download, confirm that MOS-based optimization leads to significant improvement in terms of user perceived quality when compared to conventional throughput-based optimization.
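The MOS-as-unifying-metric idea can be illustrated with a greedy marginal-utility allocator: each unit of rate goes to the application whose MOS would improve the most. The MOS-versus-rate curves below are invented for illustration and are not the models used in the paper.

```python
import math

# Hypothetical MOS-vs-rate curves for three application types (illustrative only).
def mos_voice(rate): return min(4.5, 1.0 + 1.2 * math.log1p(rate))
def mos_video(rate): return min(4.5, 1.0 + 1.6 * math.log1p(rate / 4.0))
def mos_file(rate):  return min(4.5, 1.0 + 0.5 * math.sqrt(rate))

def allocate(apps, total_units, step=1.0):
    """Greedy allocation: each increment of rate goes to the application
    with the largest marginal MOS gain (a sketch, not the paper's optimizer)."""
    alloc = {name: 0.0 for name, _ in apps}
    mos_fn = dict(apps)
    units = total_units
    while units > 0:
        gains = {n: mos_fn[n](alloc[n] + step) - mos_fn[n](alloc[n]) for n in alloc}
        best = max(gains, key=gains.get)
        alloc[best] += step
        units -= step
    return alloc, sum(mos_fn[n](alloc[n]) for n in alloc)

apps = [('voice', mos_voice), ('video', mos_video), ('file', mos_file)]
print(allocate(apps, total_units=20))
```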
Statistical timing analysis for intra-die process variations with spatial correlations Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
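For readers unfamiliar with block-based SSTA, the following sketch shows a generic first-order canonical delay form with exact `add` and a Clark-style moment-matching `max`. This is one common way arrival times are propagated while preserving correlation to shared variation sources; it is not the specific bound-computation procedure of the paper, and the class name, sensitivities and numbers are assumptions for illustration.

```python
import math
import numpy as np
from scipy.stats import norm

class CanonicalDelay:
    """First-order canonical delay: d = mean + sum_i sens[i] * X_i,
    with X_i independent standard-normal variation sources."""
    def __init__(self, mean, sens):
        self.mean = float(mean)
        self.sens = np.asarray(sens, dtype=float)

    @property
    def std(self):
        return float(np.linalg.norm(self.sens))

    def __add__(self, other):
        # Addition of correlated delays is exact in this form.
        return CanonicalDelay(self.mean + other.mean, self.sens + other.sens)

def stat_max(a, b):
    """Clark's moment-matching approximation of max(a, b), re-expressed
    in canonical form (a common heuristic, not the paper's bound)."""
    cov = float(a.sens @ b.sens)
    theta = math.sqrt(max(a.std**2 + b.std**2 - 2 * cov, 1e-12))
    x = (a.mean - b.mean) / theta
    t = norm.cdf(x)
    mean = a.mean * t + b.mean * (1 - t) + theta * norm.pdf(x)
    # Blend sensitivities by tightness probability to keep correlations.
    return CanonicalDelay(mean, t * a.sens + (1 - t) * b.sens)

# Two gate delays sharing variation source X_0 (values are illustrative).
d1 = CanonicalDelay(10.0, [1.0, 0.5, 0.0])
d2 = CanonicalDelay(9.0,  [1.0, 0.0, 0.8])
arrival = stat_max(d1, d2) + CanonicalDelay(2.0, [0.2, 0.0, 0.0])
print(arrival.mean, arrival.std)
```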
Selecting Access Network for BYOD Enterprises with Business Context (eBC) and Enterprise-Centric ANDSF.
Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods The evaluation and selection of projects before an investment decision is customarily done using technical and financial information. In this paper, we propose a new methodology that provides a simple approach to assess alternative projects and help the decision-maker select the best one for the National Iranian Oil Company, using six criteria for comparing investment alternatives within AHP and fuzzy TOPSIS techniques. The AHP is used to analyze the structure of the project selection problem and to determine the weights of the criteria, and the fuzzy TOPSIS method is used to obtain the final ranking. An application is conducted to illustrate the utilization of the model for project selection problems. Additionally, in the application, it is shown that the calculation of the criteria weights is important in the fuzzy TOPSIS method and that they can change the ranking. The decision-maker can use these different weight combinations in the decision-making process according to priority.
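A compact sketch of a fuzzy TOPSIS ranking step with triangular fuzzy ratings and crisp (e.g., AHP-derived) weights. The normalisation and ideal-solution choices below follow one common textbook variant and the data are invented, so this is an illustration rather than the paper's exact procedure.

```python
import numpy as np

def tfn_distance(a, b):
    # Vertex distance between triangular fuzzy numbers a = (l, m, u), b = (l, m, u).
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def fuzzy_topsis(ratings, weights):
    """ratings[i][j] = triangular fuzzy rating (l, m, u) of alternative i on
    benefit criterion j; weights[j] = crisp criterion weight (e.g. from AHP)."""
    R = np.asarray(ratings, dtype=float)            # shape (m, n, 3)
    # Linear-scale normalisation by the largest upper bound per criterion.
    umax = R[:, :, 2].max(axis=0)
    V = R / umax[None, :, None] * np.asarray(weights)[None, :, None]
    # Fuzzy positive/negative ideal solutions per criterion (simplified choice).
    fpis = V.max(axis=0)
    fnis = V.min(axis=0)
    d_plus = np.array([sum(tfn_distance(V[i, j], fpis[j]) for j in range(V.shape[1]))
                       for i in range(V.shape[0])])
    d_minus = np.array([sum(tfn_distance(V[i, j], fnis[j]) for j in range(V.shape[1]))
                        for i in range(V.shape[0])])
    return d_minus / (d_plus + d_minus)             # closeness coefficients

# Illustrative data: 3 projects, 2 benefit criteria, AHP-style weights.
ratings = [[(3, 5, 7), (5, 7, 9)],
           [(5, 7, 9), (3, 5, 7)],
           [(1, 3, 5), (7, 9, 10)]]
print(fuzzy_topsis(ratings, weights=[0.6, 0.4]))
```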
Upper and lower values for the level of fuzziness in FCM The level of fuzziness is a parameter in fuzzy system modeling which is a source of uncertainty. In order to explore the effect of this uncertainty, one needs to investigate and identify effective upper and lower boundaries of the level of fuzziness. For this purpose, the Fuzzy c-means (FCM) clustering methodology is investigated to determine the effective upper and lower boundaries of the level of fuzziness in order to capture the uncertainty generated by this parameter. In this regard, we propose to expand the membership function around important information points of FCM. These important information points are the cluster centers and the mass center. At these points, it is known that the level of fuzziness has no effect on the membership values. In this way, we identify the counter-intuitive behavior of the membership function near these particular information points. It will be shown that the upper and lower values of the level of fuzziness can be identified. Hence the uncertainty generated by this parameter can be encapsulated.
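The statement that the level of fuzziness has no effect at the cluster centers and the mass center can be checked directly from the standard FCM membership formula. The sketch below uses two equally weighted cluster centers so that their mass center is equidistant from both (an assumption made purely for the illustration).

```python
import numpy as np

def fcm_memberships(x, centers, m):
    """Standard FCM membership of point x in each cluster for fuzzifier m."""
    d = np.linalg.norm(np.asarray(centers) - np.asarray(x), axis=1)
    if np.any(d == 0):                      # x coincides with a cluster center
        return (d == 0).astype(float)
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

centers = np.array([[0.0, 0.0], [4.0, 0.0]])
mass_center = centers.mean(axis=0)          # equidistant from both centers here

for m in (1.2, 2.0, 5.0):
    print(m,
          fcm_memberships(centers[0], centers, m),   # [1, 0] for every m
          fcm_memberships(mass_center, centers, m))  # [0.5, 0.5] for every m
```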
Generalised Interval-Valued Fuzzy Soft Set. We introduce the concept of generalised interval-valued fuzzy soft set and its operations and study some of their properties. We give applications of this theory in solving a decision making problem. We also introduce a similarity measure of two generalised interval-valued fuzzy soft sets and discuss its application in a medical diagnosis problem. Keywords: fuzzy set; soft set; fuzzy soft set; generalised fuzzy soft set; generalised interval-valued fuzzy soft set; interval-valued fuzzy set; interval-valued fuzzy soft set.
Scores: 1.11025, 0.11025, 0.11, 0.055, 0.036935, 0.011, 0.004778, 0.000922, 0.000028, 0, 0, 0, 0, 0
Non-gaussian statistical parameter modeling for SSTA with confidence interval analysis Most of the existing statistical static timing analysis (SSTA) algorithms assume that the process parameters have been given with 100% confidence level or zero error, and preferably follow Gaussian distributions. These assumptions are actually quite questionable and require careful attention. In this paper, we aim at providing solid statistical analysis methods to analyze the measurement data from test chips and extract the statistical distribution, either Gaussian or non-Gaussian, which can be used in advanced SSTA algorithms for confidence interval or error bound information. Two contributions are achieved by this paper. First, we develop a moment-matching based quadratic function modeling method to fit the first three moments of given measurement data in plain form which may not follow Gaussian distributions. Second, we provide a systematic way to analyze the confidence intervals of our modeling strategies. The confidence interval analysis gives solid guidelines for test-chip data collection. Extensive experimental results demonstrate the accuracy of our algorithm.
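One plausible way to realise such a quadratic moment match is to model the measured quantity as Y = a + bX + cX² with X standard normal and solve for (a, b, c) from the first three central moments; the closed-form moment expressions below follow from Gaussian moment identities, while the fitting procedure and the test data are illustrative assumptions rather than the paper's exact method.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import skew

def fit_quadratic_of_normal(samples):
    """Fit Y = a + b*X + c*X**2 with X ~ N(0, 1) so that Y matches the first
    three central moments of the measured samples (an illustrative sketch)."""
    m1 = np.mean(samples)
    m2 = np.var(samples)
    m3 = skew(samples) * m2 ** 1.5                 # third central moment

    def equations(p):
        a, b, c = p
        return (a + c - m1,                        # E[Y]
                b**2 + 2*c**2 - m2,                # Var[Y]
                6*b**2*c + 8*c**3 - m3)            # E[(Y - E[Y])**3]

    return fsolve(equations, x0=(m1, np.sqrt(m2), 0.0))

# Skewed, non-Gaussian "measurement" data (invented for illustration).
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.4, size=5000)
a, b, c = fit_quadratic_of_normal(data)

# Check: the fitted quadratic-of-normal reproduces mean/variance/skewness closely.
x = rng.standard_normal(200_000)
model = a + b * x + c * x**2
print(np.mean(data), np.var(data), skew(data))
print(np.mean(model), np.var(model), skew(model))
```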
On path-based learning and its applications in delay test and diagnosis This paper describes the implementation of a novel path-based learning methodology that can be applied for two purposes: (1) In a pre-silicon simulation environment, path-based learning can be used to produce a fast and approximate simulator for statistical timing simulation. (2) In post-silicon phase, path-based learning can be used as a vehicle to derive critical paths based on the pass/fail behavior observed from the test chips. Our path-based learning methodology consists of four major components: a delay test pattern set, a logic simulator, a set of selected paths as the basis for learning, and a machine learner. We explain the key concepts in this methodology and present experimental results to demonstrate its feasibility and applications.
VGTA: Variation Aware Gate Timing Analysis As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to gate and wire variability. Therefore, statistical timing analysis is inevitable. Most timing tools divide the analysis into two parts: 1) interconnect (wire) timing analysis and 2) gate timing analysis. Variational interconnect delay calculation for block-based TA has been recently studied. However, variational gate delay calculation has remained unexplored. In this paper, we propose a new framework to handle variation-aware gate timing analysis in block-based TA. First, we present an approach to approximate the variational RC-load by using a canonical first-order model. Next, an efficient variation-aware effective capacitance calculation based on statistical input transition, statistical gate timing library, and statistical RC-load is presented. In this step, we use a single-iteration Ceff calculation which is efficient and reasonably accurate. Finally, we calculate the statistical gate delay and output slew based on the aforementioned model. Experimental results show an average error of 7% for gate delay and output slew with respect to HSPICE Monte Carlo simulation, while the runtime is about 145 times faster.
A probabilistic analysis of pipelined global interconnect under process variations The main thesis of this paper is to perform a reliability based performance analysis for a shared latch inserted global interconnect under uncertainty. We first put forward a novel delay metric named DMA for estimation of interconnect delay probability density function considering process variations. Without considerable loss in accuracy, DMA can achieve high computational efficiency even in a large space of random variables. We then propose a comprehensive probabilistic methodology for sampling transfers, on a shared latch inserted global interconnect, that highly improves the reliability of the interconnect. Improvements up to 125% are observed in the reliability when compared to deterministic sampling approach. It is also shown that dual phase clocking scheme for pipelined global interconnect is able to meet more stringent timing constraints due to its lower latency.
An exact algorithm for the statistical shortest path problem Graph algorithms are widely used in VLSI CAD. Traditional graph algorithms can handle graphs with deterministic edge weights. As VLSI technology continues to scale into nanometer designs, we need to use probability distributions for edge weights in order to model uncertainty due to parameter variations. In this paper, we consider the statistical shortest path (SSP) problem. Given a graph G, the edge weights of G are random variables. For each path P in G, let L_P be its length, which is the sum of all edge weights on P. Clearly L_P is a random variable, and we let μ_P and σ_P^2 be its mean and variance, respectively. In the SSP problem, our goal is to find a path P connecting two given vertices to minimize the cost function μ_P + Φ(σ_P^2), where Φ is an arbitrary function. (For example, one choice of Φ gives the cost function μ_P + 3σ_P.) To minimize uncertainty in the final result, it is meaningful to look for paths with bounded variance, i.e., σ_P^2 ≤ B for a given fixed bound B. In this paper, we present an exact algorithm to solve the SSP problem in O(B(V + E)) time, where V and E are the numbers of vertices and edges, respectively, in G. Our algorithm is superior to previous algorithms for the SSP problem because we can handle: 1) general graphs (unlike previous works applicable only to directed acyclic graphs), 2) arbitrary edge-weight distributions (unlike previous algorithms designed only for specific distributions such as Gaussian), and 3) general cost functions (none of the previous algorithms can even handle the cost function μ_P + 3σ_P). Finally, we discuss applications of the SSP problem to maze routing, buffer insertion, and timing analysis under parameter variations.
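A simplified sketch of the bounded-variance idea: if edge variances are integers, a Dijkstra-style search over (vertex, accumulated variance) states finds, for every variance budget up to B, the smallest achievable mean, after which the cost μ_P + Φ(σ_P²) can be minimised. The integer-variance assumption and the toy graph are simplifications for illustration, not the paper's exact algorithm.

```python
import heapq

def statistical_shortest_path(edges, src, dst, B, phi):
    """Minimise mean(P) + phi(var(P)) over paths with var(P) <= B.
    edges: list of (u, v, mean, var) with non-negative means and
    integer variances -- a simplifying assumption for this sketch."""
    adj = {}
    for u, v, mu, var in edges:
        adj.setdefault(u, []).append((v, mu, var))
    # Dijkstra on the product state space (vertex, accumulated variance).
    best = {}
    pq = [(0.0, src, 0)]                      # (accumulated mean, vertex, variance)
    while pq:
        mu, u, var = heapq.heappop(pq)
        if best.get((u, var), float('inf')) < mu:
            continue                          # stale queue entry
        best[(u, var)] = mu
        for v, emu, evar in adj.get(u, []):
            nvar = var + evar
            if nvar > B:
                continue                      # variance budget exceeded
            nmu = mu + emu
            if nmu < best.get((v, nvar), float('inf')):
                best[(v, nvar)] = nmu
                heapq.heappush(pq, (nmu, v, nvar))
    costs = [mu + phi(var) for (v, var), mu in best.items() if v == dst]
    return min(costs) if costs else None

edges = [('s', 'a', 3.0, 1), ('a', 't', 3.0, 1),   # low-mean, low-variance route
         ('s', 't', 5.0, 4)]                        # direct but riskier route
print(statistical_shortest_path(edges, 's', 't', B=4,
                                phi=lambda var: 3 * var ** 0.5))
```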
Statistical timing analysis using levelized covariance propagation Variability in process parameters is making accurate timing analysis of nanoscale integrated circuits an extremely challenging task. In this paper, we propose a new algorithm for statistical timing analysis using levelized covariance propagation (LCP). The algorithm simultaneously considers the impact of random placement of dopants (which makes every transistor in a die independent in terms of threshold voltage) and the spatial correlation of the process parameters such as channel length, transistor width and oxide thickness due to the intra-die variations. It also considers the signal correlation due to reconvergent paths in the circuit. Results on several benchmark circuits in 70 nm technology show an average of 0.21 % and 1.07 % errors in mean and the standard deviation, respectively, in timing analysis using the proposed technique compared to the Monte-Carlo analysis.
Non-linear statistical static timing analysis for non-Gaussian variation sources Existing statistical static timing analysis (SSTA) techniques suffer from limited modeling capability by using a linear delay model with Gaussian distribution, or have scalability problems due to expensive operations involved to handle non-Gaussian variation sources or non-linear delays. To overcome these limitations, we propose a novel SSTA technique to handle both nonlinear delay dependency and non-Gaussian variation sources simultaneously. We develop efficient algorithms to perform all statistical atomic operations (such as max and add) efficiently via either closed-form formulas or one-dimensional lookup tables. The resulting timing quantity provably preserves the correlation with variation sources to the third-order. We prove that the complexity of our algorithm is linear in both variation sources and circuit sizes, hence our algorithm scales well for large designs. Compared to Monte Carlo simulation for non-Gaussian variation sources and nonlinear delay models, our approach predicts all timing characteristics of circuit delay with less than 2% error.
Transistor sizing of custom high-performance digital circuits with parametric yield considerations Transistor sizing is a classic Computer-Aided Design problem that has received much attention in the literature. Due to the increasing importance of process variations in deep sub-micron circuits, nominal circuit tuning is not sufficient, and the sizing problem warrants revisiting. This paper addresses the sizing problem statistically in which transistor sizes are automatically adjusted to maximize parametric yield at a given timing performance, or maximize performance at a required parametric yield. Specifically, we describe an implementation of a statistical tuner using interior point nonlinear optimization with an objective function that is directly dependent on statistical process variation. Our results show that for process variation sensitive circuits, consisting of thousands of independently tunable devices, a statistically aware tuner can give more robust, higher yield solutions when compared to deterministic circuit tuning and is thus an attractive alternative to the Monte Carlo methods that are typically used to size devices in such circuits. To the best of our knowledge, this is the first publication of a working system to optimize device sizes in custom circuits using a process variation aware tuner.
Principle hessian direction based parameter reduction for interconnect networks with process variation As CMOS technology enters the nanometer regime, increasing process variation is bringing a manifest impact on circuit performance. To accurately take account of both global and local process variations, a large number of random variables (or parameters) have to be incorporated into circuit models. This in turn raises the complexity of the circuit models. This paper proposes a Principle Hessian Direction (PHD) based parameter reduction approach for interconnect networks. The proposed approach relies on each parameter's impact on circuit performance to decide whether to keep or reduce the parameter. Compared with the existing principal component analysis (PCA) method, this performance-based property provides a significantly smaller parameter set after reduction. The experimental results also support our conclusions. In interconnect cases, the proposed method reduces 70% of the parameters. In some cases (the mesh example in this paper), the new approach leads to an 85% reduction. We also tested ISCAS benchmarks. In all cases, an average of 53% reduction is observed with less than 3% error in mean and less than 8% error in variation.
SPARE: a scalable algorithm for passive, structure preserving, parameter-aware model order reduction This paper describes a flexible and efficient new algorithm for model order reduction of parameterized systems. The method is based on the reformulation of the parameterized system as a perturbation-like parallel interconnection of the nominal transfer function and the nonparameterized transfer function sensitivities with respect to the parameter variations. Such a formulation reveals an explicit dependence on each parameter which is exploited by reducing each component system independently via a standard nonparameterized structure preserving algorithm. Therefore, the resulting smaller size interconnected system retains the structure of the original system with respect to parameter dependence. This allows for better accuracy control, enabling independent adaptive order determination with respect to each parameter and adding flexibility in simulation environments. It is shown that the method is efficiently scalable and preserves relevant system properties such as passivity. The new technique can handle fairly large parameter variations on systems whose outputs exhibit smooth dependence on the parameters, also allowing design space exploration to some degree. Several examples show that besides the added flexibility and control, when compared with competing algorithms, the proposed technique can, in some cases, produce smaller reduced models with potential accuracy gains.
Sharp thresholds for high-dimensional and noisy recovery of sparsity The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. Unfortunately, the natural optimization-theoretic formulation involves ℓ0 constraints, which leads to NP-hard problems in general; this intractability motivates the use of relaxations based on ℓ1 constraints. We analyze the behavior of ℓ1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish a sharp relation between the problem dimension p, the number s of non-zero elements in β*, and the number of observations n that are required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish existence and compute explicit values of thresholds θ_ℓ and θ_u with the following properties: for any ν > 0, if n > 2s(θ_u + ν) log(p − s) + s + 1, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2s(θ_ℓ − ν) log(p − s) + s + 1, the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that θ_ℓ = θ_u = 1, so that the threshold is sharp and exactly determined.
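The scaling n ≈ 2s log(p − s) can be probed empirically with a small Lasso simulation; the design, noise level, regularisation choice and support-detection threshold below are heuristic assumptions for illustration, not the constants from the theorem.

```python
import numpy as np
from sklearn.linear_model import Lasso

def support_recovered(n, p=256, s=8, sigma=0.5, seed=0):
    """One Lasso trial: does the estimated support match the true one?"""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))                   # Gaussian ensemble design
    beta = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    beta[support] = rng.choice([-1.0, 1.0], size=s)
    y = X @ beta + sigma * rng.standard_normal(n)
    lam = 2 * sigma * np.sqrt(np.log(p) / n)          # heuristic regularisation level
    est = Lasso(alpha=lam, fit_intercept=False, max_iter=50000).fit(X, y).coef_
    return set(np.flatnonzero(np.abs(est) > 1e-3)) == set(support)

# Success rate as n grows past the 2*s*log(p - s) scaling (illustrative only).
for n in (40, 80, 160, 320):
    rate = np.mean([support_recovered(n, seed=k) for k in range(20)])
    print(n, rate)
```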
Sparsity preserving projections with applications to face recognition Dimensionality reduction methods (DRs) have commonly been used as a principled way to understand the high-dimensional data such as face images. In this paper, we propose a new unsupervised DR method called sparsity preserving projections (SPP). Unlike many existing techniques such as local preserving projection (LPP) and neighborhood preserving embedding (NPE), where local neighborhood information is preserved during the DR procedure, SPP aims to preserve the sparse reconstructive relationship of the data, which is achieved by minimizing a L1 regularization-related objective function. The obtained projections are invariant to rotations, rescalings and translations of the data, and more importantly, they contain natural discriminating information even if no class labels are provided. Moreover, SPP chooses its neighborhood automatically and hence can be more conveniently used in practice compared to LPP and NPE. The feasibility and effectiveness of the proposed method is verified on three popular face databases (Yale, AR and Extended Yale B) with promising results.
Compressed sensing of color images This work proposes a method for color imaging via compressive sampling. Random projections from each of the color channels are acquired separately. The problem is to reconstruct the original color image from the randomly projected (sub-sampled) data. Since each of the color channels are sparse in some domain (DCT, Wavelet, etc.) one way to approach the reconstruction problem is to apply sparse optimization algorithms. We note that the color channels are highly correlated and propose an alternative reconstruction method based on group sparse optimization. Two new non-convex group sparse optimization methods are proposed in this work. Experimental results show that incorporating group sparsity into the reconstruction problem produces significant improvement (more than 1dB PSNR) over ordinary sparse algorithm.
Bacterial Community Reconstruction Using A Single Sequencing Reaction Bacteria are the unseen majority on our planet, with millions of species and comprising most of the living protoplasm. While current methods enable in-depth study of a small number of communities, a simple tool for breadth studies of bacterial population composition in a large number of samples is lacking. We propose a novel approach for reconstruction of the composition of an unknown mixture of bacteria using a single Sanger-sequencing reaction of the mixture. This method is based on compressive sensing theory, which deals with reconstruction of a sparse signal using a small number of measurements. Utilizing the fact that in many cases each bacterial community is comprised of a small subset of the known bacterial species, we show the feasibility of this approach for determining the composition of a bacterial mixture. Using simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA gene sequence may provide enough information for reconstruction of mixtures containing tens of species, out of tens of thousands, even in the presence of realistic measurement noise. Finally, we show initial promising results when applying our method for the reconstruction of a toy experimental mixture with five species. Our approach may have a potential for a practical and efficient way for identifying bacterial species compositions in biological samples.
Scores: 1.030469, 0.028959, 0.028959, 0.028959, 0.014594, 0.005371, 0.002151, 0.000315, 0.000048, 0.000008, 0, 0, 0, 0
Neural networks that learn from fuzzy if-then rules An architecture for neural networks that can handle fuzzy input vectors is proposed, and learning algorithms that utilize fuzzy if-then rules as well as numerical data in neural network learning for classification problems and for fuzzy control problems are derived. The learning algorithms can be viewed as an extension of the backpropagation algorithm to the case of fuzzy input vectors and fuzzy target outputs. Using the proposed methods, linguistic knowledge from human experts represented by fuzzy if-then rules and numerical data from measuring instruments can be integrated into a single information processing system (classification system or fuzzy control system). It is shown that the scheme works well for simple examples
Fuzzy logic in control systems: fuzzy logic controller. I.
Modeling and formulating fuzzy knowledge bases using neural networks We show how the determination of the firing level of a neuron can be viewed as a measure of possibility between two fuzzy sets, the weights of connection and the input. We then suggest a way to represent fuzzy production rules in a neural framework. Central to this representation is the notion that the linguistic variables associated with the rule, the antecedent and consequent values, are represented as weights in the resulting neural structure. The structure used to represent these fuzzy rules allows learning of the membership grades of the associated linguistic variables. A self-organization procedure for obtaining the nucleus of rules for a fuzzy knowledge base is presented.
Measures of similarity among fuzzy concepts: A comparative analysis Many measures of similarity among fuzzy sets have been proposed in the literature, and some have been incorporated into linguistic approximation procedures. The motivations behind these measures are both geometric and set-theoretic. We briefly review 19 such measures and compare their performance in a behavioral experiment. For crudely categorizing pairs of fuzzy concepts as either “similar” or “dissimilar,” all measures performed well. For distinguishing between degrees of similarity or dissimilarity, certain measures were clearly superior and others were clearly inferior; for a few subjects, however, none of the distance measures adequately modeled their similarity judgments. Measures that account for ordering on the base variable proved to be more highly correlated with subjects' actual similarity judgments. And, surprisingly, the best measures were ones that focus on only one “slice” of the membership function. Such measures are easiest to compute and may provide insight into the way humans judge similarity among fuzzy concepts.
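Two of the families of measures compared in such studies, a set-theoretic (min/max) ratio and a geometric distance-based similarity, can be written in a few lines; the membership vectors below are invented for illustration.

```python
import numpy as np

def sim_setwise(a, b):
    """Set-theoretic similarity: |A intersect B| / |A union B| using min/max."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def sim_distance(a, b):
    """Geometric similarity: 1 minus the normalised Hamming distance."""
    return 1.0 - np.mean(np.abs(a - b))

# Two fuzzy concepts on the same discrete universe (illustrative memberships).
warm = np.array([0.0, 0.2, 0.6, 1.0, 0.8, 0.3])
hot  = np.array([0.0, 0.0, 0.1, 0.5, 0.9, 1.0])
print(sim_setwise(warm, hot), sim_distance(warm, hot))
```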
Sugeno controllers with a bounded number of rules are nowhere dense In the literature, various results can be found claiming that fuzzy controllers are universal approximators. In terms of topology this means that fuzzy controllers, as subsets of adequate function spaces, are dense. In this paper the topological structure of fuzzy controllers composed of a bounded number of rules is investigated. It turns out that these sets are nowhere dense (a topological notion indicating that the sets are "almost discrete"). This means that it is just the number of rules, and not, e.g., the great variety of parameters of fuzzy controllers, that makes fuzzy controllers universal approximators.
The Vienna Definition Language
Multiple Attribute Decision Making Based on Generalized Aggregation Operators under Dual Hesitant Fuzzy Environment. We investigate the multiple attribute decision making (MADM) problems with dual hesitant fuzzy information. We first introduce some basic concepts and operations on dual hesitant fuzzy sets. Then, we develop some generalized dual hesitant fuzzy aggregation operators which encompass some existing operators as their particular cases and discuss their basic properties. Next, we apply the generalized dual hesitant fuzzy Choquet ordered aggregation (GDHFCOA) operator to deal with multiple attribute decision making problems under dual hesitant fuzzy environment. Finally, an illustrative example is given to show the developed method and demonstrate its practicality and effectiveness.
Connection admission control in ATM networks using survey-based type-2 fuzzy logic systems This paper presents a connection admission control (CAC) method that uses a type-2 fuzzy logic system (FLS). Type-2 FLSs can handle linguistic uncertainties. The linguistic knowledge about CAC is obtained from 30 computer network experts. A methodology for representing the linguistic knowledge using type-2 membership functions and processing surveys using type-2 FLS is proposed. The type-2 FLS provides soft decision boundaries, whereas a type-1 FLS provides a hard decision boundary. The soft decision boundaries can coordinate the cell loss ratio (CLR) and bandwidth utilization, which is impossible for the hard decision boundary.
Experimental study of intelligent controllers under uncertainty using type-1 and type-2 fuzzy logic Uncertainty is an inherent part in control systems used in real world applications. The use of new methods for handling incomplete information is of fundamental importance. Type-1 fuzzy sets used in conventional fuzzy systems cannot fully handle the uncertainties present in control systems. Type-2 fuzzy sets that are used in type-2 fuzzy systems can handle such uncertainties in a better way because they provide us with more parameters and more design degrees of freedom. This paper deals with the design of control systems using type-2 fuzzy logic for minimizing the effects of uncertainty produced by the instrumentation elements, environmental noise, etc. The experimental results are divided in two classes, in the first class, simulations of a feedback control system for a non-linear plant using type-1 and type-2 fuzzy logic controllers are presented; a comparative analysis of the systems' response in both cases was performed, with and without the presence of uncertainty. For the second class, a non-linear identification problem for time-series prediction is presented. Based on the experimental results the conclusion is that the best results are obtained using type-2 fuzzy systems.
Is there a need for fuzzy logic? ''Is there a need for fuzzy logic?'' is an issue which is associated with a long history of spirited discussions and debate. There are many misconceptions about fuzzy logic. Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning. More specifically, fuzzy logic may be viewed as an attempt at formalization/mechanization of two remarkable human capabilities. First, the capability to converse, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, conflicting information, partiality of truth and partiality of possibility - in short, in an environment of imperfect information. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations [L.A. Zadeh, From computing with numbers to computing with words - from manipulation of measurements to manipulation of perceptions, IEEE Transactions on Circuits and Systems 45 (1999) 105-119; L.A. Zadeh, A new direction in AI - toward a computational theory of perceptions, AI Magazine 22 (1) (2001) 73-84]. In fact, one of the principal contributions of fuzzy logic - a contribution which is widely unrecognized - is its high power of precisiation. Fuzzy logic is much more than a logical system. It has many facets. The principal facets are: logical, fuzzy-set-theoretic, epistemic and relational. Most of the practical applications of fuzzy logic are associated with its relational facet. In this paper, fuzzy logic is viewed in a nonstandard perspective. In this perspective, the cornerstones of fuzzy logic - and its principal distinguishing features - are: graduation, granulation, precisiation and the concept of a generalized constraint. A concept which has a position of centrality in the nontraditional view of fuzzy logic is that of precisiation. Informally, precisiation is an operation which transforms an object, p, into an object, p^*, which in some specified sense is defined more precisely than p. The object of precisiation and the result of precisiation are referred to as precisiend and precisiand, respectively. In fuzzy logic, a differentiation is made between two meanings of precision - precision of value, v-precision, and precision of meaning, m-precision. Furthermore, in the case of m-precisiation a differentiation is made between mh-precisiation, which is human-oriented (nonmathematical), and mm-precisiation, which is machine-oriented (mathematical). A dictionary definition is a form of mh-precisiation, with the definiens and definiendum playing the roles of precisiend and precisiand, respectively. Cointension is a qualitative measure of the proximity of meanings of the precisiend and precisiand. A precisiand is cointensive if its meaning is close to the meaning of the precisiend. A concept which plays a key role in the nontraditional view of fuzzy logic is that of a generalized constraint. If X is a variable then a generalized constraint on X, GC(X), is expressed as X isr R, where R is the constraining relation and r is an indexical variable which defines the modality of the constraint, that is, its semantics. The primary constraints are: possibilistic, (r=blank), probabilistic (r=p) and veristic (r=v). The standard constraints are: bivalent possibilistic, probabilistic and bivalent veristic. In large measure, science is based on standard constraints. Generalized constraints may be combined, qualified, projected, propagated and counterpropagated. 
The set of all generalized constraints, together with the rules which govern generation of generalized constraints, is referred to as the generalized constraint language, GCL. The standard constraint language, SCL, is a subset of GCL. In fuzzy logic, propositions, predicates and other semantic entities are precisiated through translation into GCL. Equivalently, a semantic entity, p, may be precisiated by representing its meaning as a generalized constraint. By construction, fuzzy logic has a much higher level of generality than bivalent logic. It is the generality of fuzzy logic that underlies much of what fuzzy logic has to offer. Among the important contributions of fuzzy logic are the following: 1.FL-generalization. Any bivalent-logic-based theory, T, may be FL-generalized, and hence upgraded, through addition to T of concepts and techniques drawn from fuzzy logic. Examples: fuzzy control, fuzzy linear programming, fuzzy probability theory and fuzzy topology. 2.Linguistic variables and fuzzy if-then rules. The formalism of linguistic variables and fuzzy if-then rules is, in effect, a powerful modeling language which is widely used in applications of fuzzy logic. Basically, the formalism serves as a means of summarization and information compression through the use of granulation. 3.Cointensive precisiation. Fuzzy logic has a high power of cointensive precisiation. This power is needed for a formulation of cointensive definitions of scientific concepts and cointensive formalization of human-centric fields such as economics, linguistics, law, conflict resolution, psychology and medicine. 4.NL-Computation (computing with words). Fuzzy logic serves as a basis for NL-Computation, that is, computation with information described in natural language. NL-Computation is of direct relevance to mechanization of natural language understanding and computation with imprecise probabilities. More generally, NL-Computation is needed for dealing with second-order uncertainty, that is, uncertainty about uncertainty, or uncertainty^2 for short. In summary, progression from bivalent logic to fuzzy logic is a significant positive step in the evolution of science. In large measure, the real-world is a fuzzy world. To deal with fuzzy reality what is needed is fuzzy logic. In coming years, fuzzy logic is likely to grow in visibility, importance and acceptance.
Learning and classification of monotonic ordinal concepts
Robustness of fuzzy connectives and fuzzy reasoning. In fuzzy control, practical fuzzy reasoning schemes are likely to be perturbed by various types of noise, and thus analysis of the stability and robustness of fuzzy reasoning is an important issue. We use a concept similar to the modulus of continuity to characterize the robustness of fuzzy connectives and present robustness results for various fuzzy connectives. We investigate the robustness of fuzzy reasoning from the perspective of perturbation of membership functions. We propose a method for judging the most robust elements of different classes of fuzzy connectives. The results obtained are compared with previous findings in the literature.
Performance analysis of partial segmented compressed sampling Recently, a segmented AIC (S-AIC) structure that measures the analog signal by K parallel branches of mixers and integrators (BMIs) was proposed by Taheri and Vorobyov (2011). Each branch is characterized by a random sampling waveform and implements integration in several continuous and non-overlapping time segments. By permuting the subsamples collected by each segment at different BMIs, more than K samples can be generated. To reduce the complexity of the S-AIC, in this paper we propose a partial segmented AIC (PS-AIC) structure, where K branches are divided into J groups and each group, acting as an independent S-AIC, only works within a partial period that is non-overlapping in time. Our structure is inspired by the recent validation that block diagonal matrices satisfy the restricted isometry property (RIP). Using this fact, we prove that the equivalent measurement matrix of the PS-AIC satisfies the RIP when the number of samples exceeds a certain threshold. Furthermore, the recovery performance of the proposed scheme is developed, where the analytical results show its performance gain when compared with the conventional AIC. Simulations verify the effectiveness of the PS-AIC and the validity of our theoretical results.
Stochastic Behavioral Modeling and Analysis for Analog/Mixed-Signal Circuits It has become increasingly challenging to model the stochastic behavior of analog/mixed-signal (AMS) circuits under large-scale process variations. In this paper, a novel moment-matching-based method is proposed to accurately extract the probabilistic behavioral distributions of AMS circuits. This method first utilizes Latin hypercube sampling coupled with a correlation control technique to generate a few samples (e.g., a sample size that is linear in the number of variable parameters) and then analytically evaluates the high-order moments of the circuit behavior with high accuracy. In this way, arbitrary probabilistic distributions of the circuit behavior can be extracted using the moment-matching method. More importantly, the proposed method has been successfully applied to high-dimensional problems with linear complexity. The experiments demonstrate that the proposed method can provide up to 1666X speedup over the crude Monte Carlo method for the same accuracy.
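The sampling-plus-moment-extraction flow can be sketched as follows, using SciPy's Latin hypercube sampler and a hypothetical performance function (`gain`); the paper's correlation control and analytical moment evaluation are not reproduced here.

```python
import numpy as np
from scipy.stats import qmc, norm, skew, kurtosis

def lhs_moments(perf_fn, n_params, n_samples=200, seed=0):
    """Latin hypercube sampling of normalised process parameters followed by
    moment estimation of the circuit behaviour (a sketch of the flow, not the
    paper's correlation-controlled sampler or analytical moment evaluation)."""
    sampler = qmc.LatinHypercube(d=n_params, seed=seed)
    u = sampler.random(n_samples)              # stratified uniform samples in (0, 1)^d
    x = norm.ppf(u)                            # map to standard-normal parameters
    y = np.apply_along_axis(perf_fn, 1, x)
    return {'mean': y.mean(), 'std': y.std(),
            'skew': skew(y), 'kurtosis': kurtosis(y)}

# Hypothetical AMS performance metric, mildly nonlinear in three parameters.
def gain(p):
    return 20.0 + 1.5 * p[0] - 0.8 * p[1] + 0.3 * p[0] * p[2] + 0.2 * p[2] ** 2

print(lhs_moments(gain, n_params=3))
```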
Scores: 1.039587, 0.00227, 0.001061, 0.000225, 0.000071, 0.00001, 0.000002, 0, 0, 0, 0, 0, 0, 0
Optimal margin computation for at-speed test In the face of increased process variations, at-speed manufacturing test is necessary to detect subtle delay defects. This procedure necessarily tests chips at a slightly higher speed than the target frequency required in the field. The additional performance required on the tester is called test margin. There are many good reasons for margin including voltage and temperature requirements, incomplete test coverage, aging effects, coupling effects and accounting for modeling inaccuracies. By taking advantage of statistical timing, this paper proposes an optimal method of test margin determination to maximize yield while staying within a prescribed Shipped Product Quality Loss (SPQL) limit. If process information is available from wafer testing of scribe line structures or on-chip process monitoring circuitry, this information can be leveraged to determine a per-chip test margin which can further improve yield.
Eagle-Eye: A near-optimal statistical framework for noise sensor placement The relentless technology scaling has led to significantly reduced noise margin and complicated functionalities. As such, design time techniques per se are less likely to ensure power integrity, resulting in runtime voltage emergencies. To alleviate the issue, recently several works have shed light on the possibilities of dynamic noise management systems. Most of these works rely on on-chip noise sensors to accurately capture voltage emergencies. However, they all assume, either implicitly or explicitly, that the placement of the sensors is given. It remains an open problem in the literature how to optimally place a given number of noise sensors for best voltage emergency detection. In this paper, we formally define the problem of noise sensor placement along with a novel sensing quality metric (SQM) to be maximized. We then put forward an efficient algorithm to solve it, which is proved to be optimal in the class of polynomial complexity approximations. Experimental results on a set of industrial power grid designs show that compared with a simple average-noise based heuristic and two state-of-the-art temperature sensor placement algorithms aiming at recovering the full map or capturing the hot spots at all times, the proposed method on average can reduce the miss rate of voltage emergency detections by 7.4x, 15x and 6.2x, respectively.
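Although the paper's sensing-quality metric (SQM) and optimality argument are specific to its formulation, the general shape of such placement heuristics is a greedy marginal-gain selection, sketched below with invented detectability sets.

```python
def greedy_place(coverage, k):
    """Greedily pick k sensor locations to maximise a set-coverage style
    sensing-quality objective. coverage[i] is the set of grid nodes whose
    voltage emergencies sensor location i is assumed to detect."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, -1
        for i, cov in enumerate(coverage):
            if i in chosen:
                continue
            gain = len(cov - covered)          # marginal quality gain
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        covered |= coverage[best]
    return chosen, len(covered)

# Illustrative detectability sets for 5 candidate locations over 8 grid nodes.
coverage = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6}, {6, 7, 0}]
print(greedy_place(coverage, k=2))
```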
A hierarchy of subgraphs underlying a timing graph and its use in capturing topological correlation in SSTA This paper shows that a timing graph has a hierarchy of specially defined subgraphs, based on which we present a technique that captures topological correlation in arbitrary block-based statistical static timing analysis (SSTA). We interpret a timing graph as an algebraic expression made up of addition and maximum operators. We define the division operation on the expression and propose algorithms that modify factors in the expression without expansion. As a result, they produce an expression to derive the latest arrival time with better accuracy in SSTA. Existing techniques handling reconvergent fanouts usually use dependency lists, requiring quadratic space complexity. Instead, the proposed technique has linear space complexity by using a new directed acyclic graph search algorithm. Our results show that it outperforms an existing technique in speed and memory usage with comparable accuracy.
Statistical multilayer process space coverage for at-speed test Increasingly large process variations make selection of a set of critical paths for at-speed testing essential yet challenging. This paper proposes a novel multilayer process space coverage metric to quantitatively gauge the quality of path selection. To overcome the exponential complexity in computing such a metric, this paper reveals its relationship to a concept called order statistics for a set of correlated random variables, efficient computation of which is a hitherto open problem in the literature. This paper then develops an elegant recursive algorithm to compute the order statistics (or the metric) in provable linear time and space. With a novel data structure, the order statistics can also be incrementally updated. By employing a branch-and-bound path selection algorithm with above techniques, this paper shows that selecting an optimal set of paths for a multi-million-gate design can be performed efficiently. Compared to the state-of-the-art, experimental results show both the efficiency of our algorithms and better quality of our path selection.
Incremental criticality and yield gradients Criticality and yield gradients are two crucial diagnostic metrics obtained from Statistical Static Timing Analysis (SSTA). They provide valuable information to guide timing optimization and timing-driven physical synthesis. Existing work in the literature, however, computes both metrics in a non-incremental manner, i.e., after one or more changes are made in a previously-timed circuit, both metrics need to be recomputed from scratch, which is obviously undesirable for optimizing large circuits. The major contribution of this paper is to propose two novel techniques to compute both criticality and yield gradients efficiently and incrementally. In addition, while node and edge criticalities are addressed in the literature, this paper for the first time describes a technique to compute path criticalities. To further improve algorithmic efficiency, this paper also proposes a novel technique to update "chip slack" incrementally. Numerical results show our methods to be over two orders of magnitude faster than previous work.
Fast and accurate statistical criticality computation under process variations With ever-shrinking device geometries, process variations play an increased role in determining the delay of a digital circuit. Under such variations, a gate may lie on the critical path of a manufactured die with a certain probability, called the criticality probability. In this paper, we present a new technique to compute the statistical criticality information in a digital circuit under process variations by linearly traversing the edges in its timing graph and dividing it into "zones." We investigate the sources of error in using tightness probabilities for criticality computation with Clark's statistical maximum formulation. The errors are dealt with using a new clustering-based pruning algorithm which greatly reduces the size of circuit-level cutsets improving both accuracy and runtime over the current state of the art. On large benchmark circuits, our clustering algorithm gives about a 250× speedup compared with a pairwise pruning strategy with similar accuracy in results. Coupled with a localized sampling technique, errors are reduced to around 5% of Monte Carlo simulations with large speedups in runtime.
Statistical Timing Analysis: From Basic Principles to State of the Art Static-timing analysis (STA) has been one of the most pervasive and successful analysis engines in the design of digital circuits for the last 20 years. However, in recent years, the increased loss of predictability in semiconductor devices has raised concern over the ability of STA to effectively model statistical variations. This has resulted in extensive research in the so-called statistical STA (SSTA), which marks a significant departure from the traditional STA framework. In this paper, we review the recent developments in SSTA. We first discuss its underlying models and assumptions, then survey the major approaches, and close by discussing its remaining key challenges.
Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 < α ≤ 2) distances using a small (memory) space, in one pass of the data. We propose algorithms based on (1) the geometric mean estimator, for all 0 <α ≤ 2, and (2) the harmonic mean estimator, only for small α (e.g., α < 0.344). Compared with the previous classical work [27], our main contributions include: • The general sample complexity bound for α ≠ 1,2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted ε to be "small enough." For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the "conceptual promise" that the sample complexity bound similar to that for α = 1 should exist for general α, if a "non-uniform algorithm based on t-quantile" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this is one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 < α ≤ 2. • The practical and optimal algorithm for α = 0+ The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.
Hierarchical Data Aggregation Using Compressive Sensing (HDACS) in WSNs Energy efficiency is one of the key objectives in data gathering in wireless sensor networks (WSNs). Recent research on energy-efficient data gathering in WSNs has explored the use of Compressive Sensing (CS) to parsimoniously represent the data. However, the performance of CS-based data gathering methods has been limited since the approaches failed to take advantage of judicious network configurations and effective CS-based data aggregation procedures. In this article, a novel Hierarchical Data Aggregation method using Compressive Sensing (HDACS) is presented, which combines a hierarchical network configuration with CS. Our key idea is to set multiple compression thresholds adaptively based on cluster sizes at different levels of the data aggregation tree to optimize the amount of data transmitted. The advantages of the proposed model in terms of the total amount of data transmitted and data compression ratio are analytically verified. Moreover, we formulate a new energy model by factoring in both processor and radio energy consumption into the cost, especially the computation cost incurred in relatively complex algorithms. We also show that communication cost remains dominant in data aggregation in the practical applications of large-scale networks. We use both the real-world data and synthetic datasets to test CS-based data aggregation schemes on the SIDnet-SWANS simulation platform. The simulation results demonstrate that the proposed HDACS model guarantees accurate signal recovery performance. It also provides substantial energy savings compared with existing methods.
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
A theoretical framework for possibilistic independence in a weakly ordered setting The notion of independence is central in many information processing areas, such as multiple criteria decision making, databases, or uncertain reasoning. This is especially true in the later case, where the success of Bayesian networks is basically due to the graphical representation of independence they provide. This paper first studies qualitative independence relations when uncertainty is encoded by a complete pre-order between states of the world. While a lot of work has focused on the formulation of suitable definitions of independence in uncertainty theories our interest in this paper is rather to formulate a general definition of independence based on purely ordinal considerations, and that applies to all weakly ordered settings. The second part of the paper investigates the impact of the embedding of qualitative independence relations into the scale-based possibility theory. The absolute scale used in this setting enforces the commensurateness between local pre-orders (since they share the same scale). This leads to an easy decomposability property of the joint distributions into more elementary relations on the basis of the independence relations. Lastly we provide a comparative study between already known definitions of possibilistic independence and the ones proposed here.
Hierarchical statistical characterization of mixed-signal circuits using behavioral modeling A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
Selecting the advanced manufacturing technology using fuzzy multiple attributes group decision making with multiple fuzzy information Selection of advanced manufacturing technology in manufacturing system management is very important in determining manufacturing system competitiveness. This research develops a fuzzy multiple attribute decision-making approach applied to group decision-making to improve the advanced manufacturing technology selection process. Since numerous attributes must be considered in evaluating manufacturing technology suitability, and most information available at this stage is subjective, imprecise and vague, fuzzy set theory provides a mathematical framework for modeling imprecision and vagueness. In the proposed approach, a new fusion method of fuzzy information is developed for managing information assessed on different linguistic scales (multi-granularity linguistic term sets) and numerical scales. The flexible manufacturing system adopted in the Taiwanese bicycle industry is employed in this study to demonstrate the computational process of the proposed method. Finally, sensitivity analysis is performed to examine the robustness of the solution.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.11, 0.033333, 0.0325, 0.018056, 0.008377, 0.00125, 0.000227, 0, 0, 0, 0, 0, 0, 0
Polynomial chaos representation of spatio-temporal random fields from experimental measurements Two numerical techniques are proposed to construct a polynomial chaos (PC) representation of an arbitrary second-order random vector. In the first approach, a PC representation is constructed by matching a target joint probability density function (pdf) based on sequential conditioning (a sequence of conditional probability relations) in conjunction with the Rosenblatt transformation. In the second approach, the PC representation is obtained by having recourse to the Rosenblatt transformation and simultaneously matching a set of target marginal pdfs and target Spearman's rank correlation coefficient (SRCC) matrix. Both techniques are applied to model an experimental spatio-temporal data set, exhibiting strong non-stationary and non-Gaussian features. The data consists of a set of oceanographic temperature records obtained from a shallow-water acoustics transmission experiment [1]. The measurement data, observed over a finite denumerable subset of the indexing set of the random process, is treated as a collection of observed samples of a second-order random vector that can be treated as a finite-dimensional approximation of the original random field. A set of properly ordered conditional pdfs, that uniquely characterizes the target joint pdf, in the first approach and a set of target marginal pdfs and a target SRCC matrix, in the second approach, are estimated from available experimental data. Digital realizations sampled from the constructed PC representations based on both schemes capture the observed statistical characteristics of the experimental data with sufficient accuracy. The relative advantages and disadvantages of the two proposed techniques are also highlighted.
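As a concrete aside (not the sequential-conditioning/Rosenblatt construction of the entry above), the following minimal Python sketch shows the basic mechanics of a one-dimensional polynomial chaos representation: a non-Gaussian variable, here an assumed lognormal target, is expanded in probabilists' Hermite polynomials of a standard normal germ, with coefficients obtained by Gauss-Hermite projection.

```python
# Minimal 1D sketch (not the paper's scheme): Hermite polynomial chaos of an
# assumed lognormal target Y = F^{-1}(Phi(xi)), xi ~ N(0,1), by projection.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.stats import norm, lognorm

P = 6                                          # PC order (assumed)
target = lognorm(s=0.5)                        # assumed target distribution

# probabilists' Gauss-Hermite quadrature for expectations under N(0,1)
x, w = hermegauss(40)
w = w / np.sqrt(2.0 * np.pi)                   # normalize so the weights sum to 1

y = target.ppf(norm.cdf(x))                    # target samples at the quadrature nodes

# c_k = E[Y He_k(xi)] / k!, since E[He_k(xi)^2] = k!
coeffs = np.zeros(P + 1)
for k in range(P + 1):
    e_k = np.zeros(P + 1); e_k[k] = 1.0
    coeffs[k] = np.sum(w * y * hermeval(x, e_k)) / math.factorial(k)

# sanity check: the PC surrogate reproduces the target mean (c_0 = E[Y])
xi = np.random.default_rng(0).standard_normal(100_000)
print(coeffs[0], target.mean(), hermeval(xi, coeffs).mean())
```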
A multilevel finite element method for Fredholm integral eigenvalue problems In this work, we proposed a multigrid finite element (MFE) method for solving the Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in the applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted to doing a series of integral iterations and eigenvalue solving in the coarsest mesh. Then, any existing efficient integration scheme can be used for the associated integration process. The error estimates are provided, and the computational complexity is analyzed. It is noticed that the total computational work of our method is comparable with a single integration step in the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
Identification of Bayesian posteriors for coefficients of chaos expansions This article is concerned with the identification of probabilistic characterizations of random variables and fields from experimental data. The data used for the identification consist of measurements of several realizations of the uncertain quantities that must be characterized. The random variables and fields are approximated by a polynomial chaos expansion, and the coefficients of this expansion are viewed as unknown parameters to be identified. It is shown how the Bayesian paradigm can be applied to formulate and solve the inverse problem. The estimated polynomial chaos coefficients are hereby themselves characterized as random variables whose probability density function is the Bayesian posterior. This allows to quantify the impact of missing experimental information on the accuracy of the identified coefficients, as well as on subsequent predictions. An illustration in stochastic aeroelastic stability analysis is provided to demonstrate the proposed methodology.
Identification of Polynomial Chaos Representations in High Dimension from a Set of Realizations. This paper deals with the identification in high dimensions of a polynomial chaos expansion of random vectors from a set of realizations. Due to numerical and memory constraints, the usual polynomial chaos identification methods are based on a series of truncations that induce a numerical bias. This bias becomes very detrimental to the convergence analysis of polynomial chaos identification in high dimensions. This paper therefore proposes a new formulation of the usual polynomial chaos identification algorithms to avoid this numerical bias. After a review of the polynomial chaos identification method, the influence of the numerical bias on the identification accuracy is quantified. The new formulation is then described in detail and illustrated using two examples.
Karhunen-Loève expansion revisited for vector-valued random fields: Scaling, errors and optimal basis. Due to scaling effects, when dealing with vector-valued random fields, the classical Karhunen-Loeve expansion, which is optimal with respect to the total mean square error, tends to favorize the components of the random field that have the highest signal energy. When these random fields are to be used in mechanical systems, this phenomenon can introduce undesired biases for the results. This paper presents therefore an adaptation of the Karhunen-Loeve expansion that allows us to control these biases and to minimize them. This original decomposition is first analyzed from a theoretical point of view, and is then illustrated on a numerical example.
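For readers who want the plain textbook construction behind the Karhunen-Loève entries above, here is a hedged sketch of a discrete KL expansion on a 1D grid: eigendecompose an assumed squared-exponential covariance, truncate, and sample realizations. It does not implement the scaling-corrected vector-valued variant discussed in the paper; grid size, correlation length, and truncation order are arbitrary choices.

```python
# Plain discrete Karhunen-Loeve sketch on a 1D grid (assumed kernel and sizes).
import numpy as np

n, ell, M = 200, 0.2, 10                     # grid size, correlation length, truncation
x = np.linspace(0.0, 1.0, n)
C = np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))   # covariance matrix

vals, vecs = np.linalg.eigh(C)               # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending

# truncated KL: a(x, w) ~ mean + sum_{m<M} sqrt(lambda_m) * phi_m(x) * xi_m(w)
rng = np.random.default_rng(1)
xi = rng.standard_normal((M, 5))             # 5 realizations
fields = (vecs[:, :M] * np.sqrt(vals[:M])) @ xi

# fraction of total variance captured by the first M modes
print(vals[:M].sum() / vals.sum())
```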
Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a spatial or temporal field, endowed with a hierarchical Gaussian process prior. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of Markov chain Monte Carlo) and are compounded by high dimensionality of the posterior. We address these challenges by introducing truncated Karhunen-Loeve expansions, based on the prior distribution, to efficiently parameterize the unknown field and to specify a stochastic forward problem whose solution captures that of the deterministic forward model over the support of the prior. We seek a solution of this problem using Galerkin projection on a polynomial chaos basis, and use the solution to construct a reduced-dimensionality surrogate posterior density that is inexpensive to evaluate. We demonstrate the formulation on a transient diffusion equation with prescribed source terms, inferring the spatially-varying diffusivity of the medium from limited and noisy data.
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain $D \subset \mathbb{R}^d$ are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in $L^2(D)$-orthogonal bases, and on viewing the coefficients of these expansions as random parameters $y = y(\omega) = (y_i(\omega))$. This yields an equivalent parametric deterministic PDE whose solution $u(x,y)$ is a function of both the space variable $x \in D$ and the in general countably many parameters $y$. We establish new regularity theorems describing the smoothness properties of the solution $u$ as a map from $y \in U = (-1,1)^\infty$ to $V = H^1_0(D)$. These results lead to analytic estimates on the $V$ norms of the coefficients (which are functions of $x$) in a so-called "generalized polynomial chaos" (gpc) expansion of $u$. Convergence estimates of approximations of $u$ by best $N$-term truncated $V$-valued polynomials in the variable $y \in U$ are established. These estimates are of the form $N^{-r}$, where the rate of convergence $r$ depends only on the decay of the random input expansion. It is shown that $r$ exceeds the benchmark rate $1/2$ afforded by Monte Carlo simulations with $N$ "samples" (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_l\}_{l=0}^{\infty} \subset V$ of finite element spaces in $D$ of the coefficients in the $N$-term truncated gpc expansions of $u(x,y)$. In contrast to previous works, the level $l$ of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution $u$ as a map from $y \in U = (-1,1)^\infty$ to a smoothness space $W \subset V$ are established, leading to analytic estimates on the $W$ norms of the gpc coefficients and on their space discretization error. The space $W$ coincides with $H^2(D) \cap H^1_0(D)$ in the case where $D$ is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom $N_{\mathrm{dof}}$ can be obtained. Here the rate $s$ is determined by both the best $N$-term approximation rate $r$ and the approximation order of the space discretization in $D$.
Active Subspace Methods in Theory and Practice: Applications to Kriging Surfaces. Many multivariate functions in engineering models vary primarily along a few directions in the space of input parameters. When these directions correspond to coordinate directions, one may apply global sensitivity measures to determine the most influential parameters. However, these methods perform poorly when the directions of variability are not aligned with the natural coordinates of the input space. We present a method to first detect the directions of the strongest variability using evaluations of the gradient and subsequently exploit these directions to construct a response surface on a low-dimensional subspace-i.e., the active subspace-of the inputs. We develop a theoretical framework with error bounds, and we link the theoretical quantities to the parameters of a kriging response surface on the active subspace. We apply the method to an elliptic PDE model with coefficients parameterized by 100 Gaussian random variables and compare it with a local sensitivity analysis method for dimension reduction.
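A minimal sketch of the gradient-based subspace detection step described above, under an assumed quadratic test function: estimate C = E[grad f grad f^T] from gradient samples, eigendecompose it, and keep the leading eigenvectors as the active directions on which a response surface could then be built.

```python
# Active-subspace detection sketch; the test function and sample count are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 500
A = rng.standard_normal((d, 2))
f_grad = lambda x: 2 * A @ (A.T @ x)         # gradient of f(x) = ||A^T x||^2

X = rng.uniform(-1, 1, size=(n, d))
G = np.array([f_grad(x) for x in X])         # n x d gradient samples
C = G.T @ G / n                              # Monte Carlo estimate of E[grad f grad f^T]

w, V = np.linalg.eigh(C)
w, V = w[::-1], V[:, ::-1]                   # eigenvalues descending
k = 2                                        # active-subspace dimension (assumed known here)
W1 = V[:, :k]                                # active directions
Y = X @ W1                                   # reduced coordinates for a response surface
print(w[:4])                                 # sharp drop after the first two eigenvalues
```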
An Introduction To Compressive Sampling Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article s...
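A generic compressed-sensing recovery demo (not taken from the article): a k-sparse vector is recovered from far fewer random projections than its length using orthogonal matching pursuit; dimensions, sparsity level, and the Gaussian sensing matrix are assumptions for illustration.

```python
# Sparse recovery from sub-Nyquist random measurements via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                           # signal length, measurements, sparsity (assumed)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # m << n measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
print(np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x))   # small relative recovery error
```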
Similarity relations and fuzzy orderings. The notion of "similarity" as defined in this paper is essentially a generalization of the notion of equivalence. In the same vein, a fuzzy ordering is a generalization of the concept of ordering. For example, the relation x ≫ y (x is much larger than y) is a fuzzy linear ordering in the set of real numbers. More concretely, a similarity relation, S, is a fuzzy relation which is reflexive, symmetric, and transitive. Thus, let x, y be elements of a set X and μ_S(x,y) denote the grade of membership of the ordered pair (x,y) in S. Then S is a similarity relation in X if and only if, for all x, y, z in X, μ_S(x,x) = 1 (reflexivity), μ_S(x,y) = μ_S(y,x) (symmetry), and μ_S(x,z) ≥ ∨_y (μ_S(x,y) ∧ μ_S(y,z)) (transitivity), where ∨ and ∧ denote max and min, respectively. A fuzzy ordering is a fuzzy relation which is transitive. In particular, a fuzzy partial ordering, P, is a fuzzy ordering which is reflexive and antisymmetric, that is, (μ_P(x,y) > 0 and x ≠ y) ⇒ μ_P(y,x) = 0. A fuzzy linear ordering is a fuzzy partial ordering in which x ≠ y ⇒ μ_P(x,y) > 0 or μ_P(y,x) > 0. A fuzzy preordering is a fuzzy ordering which is reflexive. A fuzzy weak ordering is a fuzzy preordering in which x ≠ y ⇒ μ(x,y) > 0 or μ(y,x) > 0. Various properties of similarity relations and fuzzy orderings are investigated and, as an illustration, an extended version of Szpilrajn's theorem is proved.
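The definitions above translate directly into array operations; the following sketch checks reflexivity, symmetry, and max-min transitivity of a fuzzy relation given as a membership matrix, and computes a transitive closure. The example matrices are made up.

```python
# Check the similarity-relation axioms for a fuzzy relation stored as a matrix.
import numpy as np

def maxmin(R, S):
    """Max-min composition of two fuzzy relations."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def is_similarity(R, tol=1e-12):
    reflexive = np.allclose(np.diag(R), 1.0)
    symmetric = np.allclose(R, R.T)
    transitive = np.all(R + tol >= maxmin(R, R))   # mu(x,z) >= max_y min(mu(x,y), mu(y,z))
    return reflexive and symmetric and transitive

R = np.array([[1.0, 0.8, 0.4],
              [0.8, 1.0, 0.4],
              [0.4, 0.4, 1.0]])
print(is_similarity(R))        # True

# max-min transitive closure of a reflexive, symmetric relation that is not transitive
Q = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.7],
              [0.1, 0.7, 1.0]])
closure = Q.copy()
for _ in range(len(Q)):
    closure = np.maximum(closure, maxmin(closure, closure))
print(is_similarity(closure))  # True after closure
```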
Real-time constrained TCP-compatible rate control for video over the Internet This paper describes a rate control algorithm that captures not only the behavior of TCP's congestion avoidance mechanism but also the delay constraints of real-time streams. Building upon the TFRC protocol, a new protocol has been designed for estimating the bandwidth prediction model parameters. Making use of RTP and RTCP, this protocol better takes into account the characteristics of multimedia flows (variable packet size, delay, ...). Given the current channel state estimated by the above protocol, encoder and decoder buffer states as well as the delay constraints of the real-time video source are translated into encoder rate constraints. This global rate control model, coupled with an H.263+ loss-resilient video compression algorithm, has been extensively experimented with on various Internet links. The experiments clearly demonstrate the benefits of (1) the new protocol used for estimating the bandwidth prediction model parameters, adapted to multimedia flow characteristics, and (2) the global rate control model encompassing source buffers and end-to-end delay characteristics. The overall system significantly reduces source timeouts, hence minimizing the expected distortion, for a comparable usage of the TCP-compatible predicted bandwidth.
Estimation of FMAX and ISB in microprocessors Inherent process device variations and fluctuations during manufacturing have a large impact on the microprocessor maximum clock frequency and total leakage power. These fluctuations have a statistical distribution that calls for usage of statistical methods for frequency and leakage analysis. This paper presents a simple technique for accurate estimation of product high-level (Full Chip) parameters such as the maximum frequency (FMAX) distribution and the total leakage (ISB). Moreover, this technique can grade critical paths by their failure probability and perform what-if analysis to estimate FMAX after fixing specific speed paths. Using our FMAX/ISB prediction, we show good correlation with silicon measurements from a production microprocessor.
Factors influencing quality of experience of commonly used mobile applications. Increasingly, we use mobile applications and services in our daily life activities, to support our needs for information, communication or leisure. However, user acceptance of a mobile application depends on at least two conditions: the application's perceived experience, and the appropriateness of the application to the user's context and needs. However, we have a weak understanding of a mobile u...
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and realization of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and the aggregation of experts’ opinions depending on the parameter characterizing the aggregation situation.
Scores: 1.028076, 0.022222, 0.011595, 0.01005, 0.007619, 0.002301, 0.000544, 0.000081, 0.000002, 0, 0, 0, 0, 0
A generalized fuzzy weighted least-squares regression A fairly general fuzzy regression technique is proposed based on the least-squares approach. The main concept is to estimate the modal value and the spreads separately. In order to do this, the interactions between the modal value and the spreads are first analyzed in detail. The advantages of this new fuzzy weighted least-squares regression (FWLSR) approach are: (1) the estimation of both non-interactive and interactive fuzzy parameters can be performed by the same method, (2) the decision-makers' confidence in the gathered data and in the established model can be incorporated into the process, and (3) suspicious outliers (or fuzzy outliers), that is, data points that are obviously and suspiciously lying outside the usual range, can be treated and their effects can be reduced. A numerical example is provided to show that the proposed method can be an effective computational tool in fuzzy regression analysis.
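As a loose illustration of the separate-estimation idea (modal values and spreads fitted by two least-squares problems), here is a toy sketch assuming symmetric triangular fuzzy outputs with observed centers and spreads; it is not the FWLSR formulation of the paper, which also models the interactions and confidence weights described above.

```python
# Toy sketch: fit modal values and spreads of triangular fuzzy outputs separately.
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), x])                  # design matrix with intercept

y_center = 2.0 + 0.5 * x + rng.normal(0, 0.3, n)      # observed modal values (assumed data)
y_spread = 0.8 + 0.1 * x + rng.normal(0, 0.05, n)     # observed spreads, kept positive

beta_center, *_ = np.linalg.lstsq(X, y_center, rcond=None)   # modal-value model
beta_spread, *_ = np.linalg.lstsq(X, y_spread, rcond=None)   # spread model

print("center coefficients:", beta_center)
print("spread coefficients:", beta_spread)
```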
Fuzzy estimates of regression parameters in linear regression models for imprecise input and output data The method for obtaining the fuzzy estimates of regression parameters with the help of "Resolution Identity" in fuzzy sets theory is proposed. The α-level least-squares estimates can be obtained from the usual linear regression model by using the α-level real-valued data of the corresponding fuzzy input and output data. The membership functions of fuzzy estimates of regression parameters will be constructed according to the form of "Resolution Identity" based on the α-level least-squares estimates. In order to obtain the membership degree of any given value taken from the fuzzy estimate, optimization problems have to be solved. Two computational procedures are also provided to solve the optimization problems.
A practical approach to nonlinear fuzzy regression This paper presents a new method of mathematical modeling in an uncertain environment. The uncertainties of data and model are treated using concepts of fuzzy set theory. The model fitting principle is the minimization of a least squares objective function. A practical modeling procedure is obtained by restricting the type of data and parameter fuzziness to conical membership functions. Under this restriction, the model fitting problem can be solved numerically with the aid of any least squares software for regression with implicit constraint equations. The paper contains a short discussion of the geometry of fuzzy point and function spaces with conical membership functions, and illustrates the application of fuzzy regression with an example from terminal ballistics.
Fuzzy Regression Analysis by Support Vector Learning Approach Support vector machines (SVMs) have been very successful in pattern classification and function approximation problems for crisp data. In this paper, we incorporate the concept of fuzzy set theory into the support vector regression machine. The parameters to be estimated in the SVM regression, such as the components within the weight vector and the bias term, are set to be the fuzzy numbers. This integration preserves the benefits of SVM regression model and fuzzy regression model and has been attempted to treat fuzzy nonlinear regression analysis. In contrast to previous fuzzy nonlinear regression models, the proposed algorithm is a model-free method in the sense that we do not have to assume the underlying model function. By using different kernel functions, we can construct different learning machines with arbitrary types of nonlinear regression functions. Moreover, the proposed method can achieve automatic accuracy control in the fuzzy regression analysis task. The upper bound on number of errors is controlled by the user-predefined parameters. Experimental results are then presented that indicate the performance of the proposed approach.
The concept of a linguistic variable and its application to approximate reasoning—I By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23. In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The concept of a linguistic variable provides a means of approximate characterization of phenomena which are too complex or too ill-defined to be amenable to description in conventional quantitative terms. In particular, treating Truth as a linguistic variable with values such as true, very true, completely true, not very true, untrue, etc., leads to what is called fuzzy logic. By providing a basis for approximate reasoning, that is, a mode of reasoning which is not exact nor very inexact, such logic may offer a more realistic framework for human reasoning than the traditional two-valued logic. It is shown that probabilities, too, can be treated as linguistic variables with values such as likely, very likely, unlikely, etc. Computation with linguistic probabilities requires the solution of nonlinear programs and leads to results which are imprecise to the same degree as the underlying probabilities. The main applications of the linguistic approach lie in the realm of humanistic systems, especially in the fields of artificial intelligence, linguistics, human decision processes, pattern recognition, psychology, law, medical diagnosis, information retrieval, economics and related areas.
Artificial Paranoia
Hedges: A study in meaning criteria and the logic of fuzzy concepts
Measures of similarity among fuzzy concepts: A comparative analysis Many measures of similarity among fuzzy sets have been proposed in the literature, and some have been incorporated into linguistic approximation procedures. The motivations behind these measures are both geometric and set-theoretic. We briefly review 19 such measures and compare their performance in a behavioral experiment. For crudely categorizing pairs of fuzzy concepts as either “similar” or “dissimilar,” all measures performed well. For distinguishing between degrees of similarity or dissimilarity, certain measures were clearly superior and others were clearly inferior; for a few subjects, however, none of the distance measures adequately modeled their similarity judgments. Measures that account for ordering on the base variable proved to be more highly correlated with subjects' actual similarity judgments. And, surprisingly, the best measures were ones that focus on only one “slice” of the membership function. Such measures are easiest to compute and may provide insight into the way humans judge similarity among fuzzy concepts.
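Two of the set-theoretic measures of this kind are easy to state in code; the sketch below computes a Jaccard-style ratio and a normalized-Hamming-based similarity between two discretized triangular membership functions. The membership functions are made up, and the remaining measures compared in the paper are not reproduced.

```python
# Two simple similarity measures between discretized fuzzy sets on a common base variable.
import numpy as np

u = np.linspace(0, 10, 101)                       # base variable grid
tri = lambda a, b, c: np.clip(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0, 1)

A = tri(2, 4, 6)      # "about 4" (assumed concept)
B = tri(3, 5, 7)      # "about 5" (assumed concept)

jaccard = np.minimum(A, B).sum() / np.maximum(A, B).sum()   # sum(min) / sum(max)
hamming = 1.0 - np.abs(A - B).mean()                        # 1 - normalized Hamming distance
print(round(jaccard, 3), round(hamming, 3))
```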
Proactive public key and signature systems Emerging applications like electronic commerce and secure communications over open networks have made clear the fundamental role of public key cryptography as a unique enabler for world-wide scale security solutions. On the other hand, these solutions clearly expose the fact that the protection of private keys is a security bottleneck in these sensitive applications. This problem is further worsened in the cases where a single and unchanged private key must be kept secret for very long time (such is the case of certification authority keys, bank and e-cash keys, etc.). One crucial defense against exposure of private keys is offered by threshold cryptography where the private key functions (like signatures or decryption) are distributed among several parties such that a predetermined number of parties must cooperate in order to correctly perform these operations. This protects keys from any single point of failure. An attacker needs to break into a multiplicity of locations before it can compromise the system. However, in the case of long-lived keys the attacker still has a considerable period of time (like a few years) to gradually break the system. Here we present proactive public key systems where the threshold solutions are further enhanced by periodic
Multiple description coding: compression meets the network This article focuses on the compressed representations of pictures. The representation does not affect how many bits get from the Web server to the laptop, but it determines the usefulness of the bits that arrive. Many different representations are possible, and there is more involved in their choice than merely selecting a compression ratio. The techniques presented represent a single information...
Beyond streams and graphs: dynamic tensor analysis How do we find patterns in author-keyword associations, evolving over time? Or in Data Cubes, with product-branch-customer sales information? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks and many more. However, they have only two orders, like author and keyword, in the above example. We propose to envision such higher order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce the dynamic tensor analysis (DTA) method, and its variants. DTA provides a compact summary for high-order and high-dimensional data, and it also reveals the hidden correlations. Algorithmically, we designed DTA very carefully so that it is (a) scalable, (b) space efficient (it does not need to store the past) and (c) fully automatic with no need for user-defined parameters. Moreover, we propose STA, a streaming tensor analysis method, which provides a fast, streaming approximation to DTA. We implemented all our methods, and applied them in two real settings, namely, anomaly detection and multi-way latent semantic indexing. We used two real, large datasets, one on network flow data (100GB over 1 month) and one from DBLP (200MB over 25 years). Our experiments show that our methods are fast, accurate and that they find interesting patterns and outliers on the real datasets.
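A much-simplified sketch of the per-mode update at the heart of DTA: keep one matrix per mode, decay it with a forgetting factor, add the unfolded new tensor's Gram matrix, and take the leading eigenvectors as projections. The forgetting factor, toy Poisson data, and fixed ranks are assumptions, and details such as initialization, rank tracking, and the STA approximation are omitted.

```python
# Simplified streaming per-mode update loosely following the DTA idea.
import numpy as np

def unfold(X, mode):
    """Matricize tensor X along `mode`: (size of that mode) x (product of the rest)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def dta_update(covs, X, forgetting=0.98):
    """Decay and update the per-mode matrices with the newly arrived tensor X."""
    return [forgetting * C + unfold(X, m) @ unfold(X, m).T
            for m, C in enumerate(covs)]

def projections(covs, ranks):
    """Leading eigenvectors of each per-mode matrix give the mode projections."""
    Us = []
    for C, r in zip(covs, ranks):
        w, V = np.linalg.eigh(C)              # eigenvalues in ascending order
        Us.append(V[:, ::-1][:, :r])          # keep the r leading eigenvectors
    return Us

# toy stream of 3-way tensors (e.g. author x keyword x venue counts per time step)
rng = np.random.default_rng(0)
shape, ranks = (30, 20, 10), (3, 3, 2)
covs = [np.zeros((n, n)) for n in shape]
for _ in range(50):
    covs = dta_update(covs, rng.poisson(1.0, size=shape).astype(float))

Us = projections(covs, ranks)
print([U.shape for U in Us])                  # [(30, 3), (20, 3), (10, 2)]
```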
A review on the design and optimization of interval type-2 fuzzy controllers A review of the methods used in the design of interval type-2 fuzzy controllers has been considered in this work. The fundamental focus of the work is based on the basic reasons for optimizing type-2 fuzzy controllers for different areas of application. Recently, bio-inspired methods have emerged as powerful optimization algorithms for solving complex problems. In the case of designing type-2 fuzzy controllers for particular applications, the use of bio-inspired optimization methods have helped in the complex task of finding the appropriate parameter values and structure of the fuzzy systems. In this review, we consider the application of genetic algorithms, particle swarm optimization and ant colony optimization as three different paradigms that help in the design of optimal type-2 fuzzy controllers. We also mention alternative approaches to designing type-2 fuzzy controllers without optimization techniques. We also provide a comparison of the different optimization methods for the case of designing type-2 fuzzy controllers.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establishes an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
Scores: 1.060583, 0.066667, 0.054458, 0.018417, 0.000301, 0.000013, 0.000002, 0, 0, 0, 0, 0, 0, 0
Stochastic approaches to uncertainty quantification in CFD simulations. This paper discusses two stochastic approaches to computing the propagation of uncertainty in numerical simulations: polynomial chaos and stochastic collocation. Chebyshev polynomials are used in both cases for the conventional, deterministic portion of the discretization in physical space. For the stochastic parameters, polynomial chaos utilizes a Galerkin approximation based upon expansions in Hermite polynomials, whereas stochastic collocation rests upon a novel transformation between the stochastic space and an artificial space. In our present implementation of stochastic collocation, Legendre interpolating polynomials are employed. These methods are discussed in the specific context of a quasi-one-dimensional nozzle flow with uncertainty in inlet conditions and nozzle shape. It is shown that both stochastic approaches efficiently handle uncertainty propagation. Furthermore, these approaches enable computation of statistical moments of arbitrary order in a much more effective way than other usual techniques such as the Monte Carlo simulation or perturbation methods. The numerical results indicate that the stochastic collocation method is substantially more efficient than the full Galerkin, polynomial chaos method. Moreover, the stochastic collocation method extends readily to highly nonlinear equations. An important application is to the stochastic Riemann problem, which is of particular interest for spectral discontinuous Galerkin methods.
Numerical Methods for Differential Equations in Random Domains Physical phenomena in domains with rough boundaries play an important role in a variety of applications. Often the topology of such boundaries cannot be accurately described in all of its relevant detail due to either insufficient data or measurement errors or both. This topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we propose a novel computational framework, which is based on using stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively, and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a stochastic Galerkin method and Monte Carlo simulations to solve the transformed stochastic problem. We demonstrate our approach by applying it to an elliptic problem in single- and double-connected random domains, and comment on the accuracy and convergence of the numerical methods.
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data In this paper we propose and analyze a Stochastic Collocation method to solve elliptic Partial Differential Equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the Stochastic Galerkin method proposed in [Babuška-Tempone-Zouraris, SIAM J. Num. Anal. 42 (2004)] and allows one to treat easily a wider range of situations, such as: input data that depend non-linearly on the random variables, diffusivity coefficients with unbounded second moments, random variables that are correlated or have unbounded support. We provide a rigorous convergence analysis and demonstrate exponential convergence of the "probability error" with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Key words: Collocation method, stochastic PDEs, finite elements, uncertainty quantification, exponential convergence. AMS subject classification: 65N35, 65N15, 65C20
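The non-intrusive character of stochastic collocation is easy to see in one dimension: solve an uncoupled deterministic problem at each Gauss point and recover output statistics by quadrature. In the sketch below the "solver" is an assumed analytic stand-in, not a PDE discretization.

```python
# 1D stochastic collocation sketch with Gauss-Legendre points for y ~ U(-1, 1).
import numpy as np
from numpy.polynomial.legendre import leggauss

def deterministic_solve(y):
    """Stand-in for a deterministic solve with random input y in [-1, 1] (assumed model)."""
    a = 1.0 + 0.5 * y                # uncertain diffusion-like coefficient
    return 1.0 / a                   # scalar quantity of interest

nodes, weights = leggauss(8)         # Gauss-Legendre points/weights on [-1, 1]
weights = weights / 2.0              # account for the uniform density on [-1, 1]

q = np.array([deterministic_solve(y) for y in nodes])   # uncoupled deterministic solves
mean = np.sum(weights * q)
var = np.sum(weights * q**2) - mean**2
print(mean, var)                     # exact mean here is ln(3) = 1.0986...
```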
Stochastic analysis of transport in tubes with rough walls Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.
On tensor product approximation of analytic functions. We prove sharp, two-sided bounds on sums of the form $\sum_{k \in \mathbb{N}_0^d \setminus D_a(T)} \exp\bigl(-\sum_{j=1}^{d} a_j k_j\bigr)$, where $D_a(T) := \{k \in \mathbb{N}_0^d : \sum_{j=1}^{d} a_j k_j \le T\}$ and $a \in \mathbb{R}_+^d$. These sums appear in the error analysis of tensor product approximation, interpolation and integration of d-variate analytic functions. Examples are tensor products of univariate Fourier–Legendre expansions (Beck et al., 2014) or interpolation and integration rules at Leja points (Chkifa et al., 2013), (Narayan and Jakeman, 2014), (Nobile et al., 2014). Moreover, we discuss the limit $d \to \infty$, where we prove both algebraic and sub-exponential upper bounds. As an application we consider tensor products of Hardy spaces, where we study convergence rates of a certain truncated Taylor series, as well as of interpolation and integration using Leja points.
A stochastic collocation method for the second order wave equation with a discontinuous random speed In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical space and depends on a finite number of random variables. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. This approach leads to the solution of uncoupled deterministic problems as in the Monte Carlo method. We consider both full and sparse tensor product spaces of orthogonal polynomials. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points for full and sparse tensor product spaces and under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems, the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence may only be algebraic. An exponential/fast rate of convergence is still possible for some quantities of interest and for the wave solution with particular types of data. We present numerical examples, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo method for this class of problems.
Karhunen-Loève approximation of random fields by generalized fast multipole methods KL approximation of a possibly instationary random field $a(\omega, x) \in L^2(\Omega, dP; L^\infty(D))$ subject to prescribed mean field $E_a(x) = \int_\Omega a(\omega, x)\, dP(\omega)$ and covariance $V_a(x,x') = \int_\Omega (a(\omega, x) - E_a(x))(a(\omega, x') - E_a(x'))\, dP(\omega)$ in a polyhedral domain $D \subset \mathbb{R}^d$ is analyzed. We show how for stationary covariances $V_a(x,x') = g_a(|x - x'|)$ with $g_a(z)$ analytic outside of $z = 0$, an M-term approximate KL-expansion $a_M(\omega, x)$ of $a(\omega, x)$ can be computed in log-linear complexity. The approach applies in arbitrary domains $D$ and for nonseparable covariances $C_a$. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree $p \ge 0$ on a quasiuniform, possibly unstructured mesh of width $h$ in $D$, plus a generalized fast multipole accelerated Krylov eigensolver. The approximate KL-expansion $a_M(\omega, x)$ of $a(\omega, x)$ has accuracy $O(\exp(-b M^{1/d}))$ if $g_a$ is analytic at $z = 0$ and accuracy $O(M^{-k/d})$ if $g_a$ is $C^k$ at zero. It is obtained in $O(M N (\log N)^b)$ operations where $N = O(h^{-d})$.
Average case complexity of multivariate integration for smooth functions We study the average case complexity of multivariate integration for the class of smooth functions equipped with the folded Wiener sheet measure. The complexity is derived by reducing this problem to multivariate integration in the worst case setting but for a different space of functions. Fully constructive optimal information and an optimal algorithm are presented. Next, fully constructive almost optimal information and an almost optimal algorithm are also presented which have some advantages for practical implementation.
Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions For $d$-dimensional tensors with possibly large $d > 3$, a hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leaves corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given.
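A TT-SVD style sketch of the "sequence of SVDs on unfolding matrices" idea: it builds a tensor-train representation, which is closely related to, but not identical with, the Tree-Tucker format of the paper; the test tensor and truncation tolerance are assumptions.

```python
# Tensor-train decomposition by successive truncated SVDs of unfolding matrices.
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose tensor T into a list of 3-way TT cores."""
    shape, d = T.shape, T.ndim
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))           # truncation rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into a full tensor (for checking)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.squeeze(axis=(0, T.ndim - 1))

# low-rank test tensor: f(i,j,k,l) = sin(x_i) + cos(x_j) * x_k + x_l (assumed example)
grid = np.arange(10) / 9.0
T = (np.sin(grid)[:, None, None, None]
     + np.cos(grid)[None, :, None, None] * grid[None, None, :, None]
     + grid[None, None, None, :])
cores = tt_svd(T)
print([G.shape for G in cores])
print(np.max(np.abs(tt_full(cores) - T)))   # ~ machine precision
```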
Efficient Block-Based Parameterized Timing Analysis Covering All Potentially Critical Paths In order for the results of timing analysis to be useful, they must provide insight and guidance on how the circuit may be improved so as to fix any reported timing problems. A limitation of many recent variability-aware timing analysis techniques is that, while they report delay distributions, or verify multiple corners, they do not provide the required guidance for re-design. We propose an efficient block-based parameterized timing analysis technique that can accurately capture circuit delay at every point in the parameter space, by reporting all paths that can become critical. Using an efficient pruning algorithm, only those potentially critical paths are carried forward, while all other paths are discarded during propagation. This allows one to examine local robustness to parameters in different regions of the parameter space, not by considering differential sensitivity at a point (that would be useless in this context) but by knowledge of the paths that can become critical at nearby points in parameter space. We give a formal definition of this problem and propose a technique for solving it, which improves on the state of the art, both in terms of theoretical computational complexity and in terms of runtime on various test circuits.
From Finance to Flip Flops: A Study of Fast Quasi-Monte Carlo Methods from Computational Finance Applied to Statistical Circuit Analysis Problems in computational finance share many of the characteristics that challenge us in statistical circuit analysis: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive sample simulation. We offer a detailed experimental study of how one celebrated technique from this domain - quasi-Monte Carlo (QMC) analysis - can be used for fast statistical circuit analysis. In contrast with traditional pseudo-random Monte Carlo sampling, QMC substitutes a (shorter) sequence of deterministically chosen sample points. Across a set of digital and analog circuits, in 90nm and 45nm technologies, varying in size from 30 to 400 devices, we obtain speedups in parametric yield estimation from 2x to 50x.
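A hedged sketch of the MC-versus-QMC comparison on a toy yield problem: the "circuit metric" is an assumed analytic function of standardized process parameters rather than a real simulation, and scrambled Sobol points stand in for the QMC sequences studied in the paper.

```python
# Toy parametric-yield estimate with pseudo-random MC and scrambled-Sobol QMC samples.
import numpy as np
from scipy.stats import norm, qmc

def circuit_metric(p):
    """Assumed delay-like metric of 10 standardized process parameters (illustration only)."""
    return 1.0 + 0.05 * p.sum(axis=1) + 0.02 * (p**2).sum(axis=1)

def yield_estimate(samples, spec=1.6):
    return np.mean(circuit_metric(samples) < spec)

d, n = 10, 2**12
rng = np.random.default_rng(0)

mc_params = rng.standard_normal((n, d))            # pseudo-random Monte Carlo points
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_params = norm.ppf(sobol.random(n))             # map U(0,1) points to N(0,1)

print("MC  yield:", yield_estimate(mc_params))
print("QMC yield:", yield_estimate(qmc_params))
```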
Decision station: a notion for a situated DSS Despite the growing need for decision support in the digital age, there has not been an adequate increase of interest in research and development in Decision Support Systems (DSSs). In our view, the vision for a new type of DSS should provision a tighter integration with the problem domain and include implementation phase in addition to the traditional intelligence, design, and choice phases. We argue that an adequate DSS in our dynamic electronic era should be situated in the problem environment. We propose a generic architecture for such a DSS incorporating sensors, effectors, and enhanced interfaces in addition to the traditional DSS kernel. We suggest the term "Decision Station" to refer to such situated DSS. We further elaborate on the possibilities to implement situated DSS in different segments of e-business. We argue in favor of using intelligent agents as the basis of new type of DSS. We further propose an architecture and describe a prototype for such DSS.
Simple and practical algorithm for sparse Fourier transform We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating "large" coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in "one shot", in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.
Stochastic approximation learning for mixtures of multivariate elliptical distributions Most of the current approaches to mixture modeling consider mixture components from a few families of probability distributions, in particular from the Gaussian family. The reasons of these preferences can be traced to their training algorithms, typically versions of the Expectation-Maximization (EM) method. The re-estimation equations needed by this method become very complex as the mixture components depart from the simplest cases. Here we propose to use a stochastic approximation method for probabilistic mixture learning. Under this method it is straightforward to train mixtures composed by a wide range of mixture components from different families. Hence, it is a flexible alternative for mixture learning. Experimental results are presented to show the probability density and missing value estimation capabilities of our proposal.
Scores: 1.205267, 0.016638, 0.002284, 0.001637, 0.00125, 0.000455, 0.000118, 0.000012, 0.000002, 0, 0, 0, 0, 0
Multi-criteria decision making method based on possibility degree of interval type-2 fuzzy number This paper proposes a new approach based on possibility degree to solve multi-criteria decision making (MCDM) problems in which the criteria value takes the form of interval type-2 fuzzy number. First, a new expected value function is defined and an optimal model based on maximizing deviation method is constructed to obtain weight coefficients when criteria weight information is partially known. Then, the overall value of each alternative is calculated by the defined aggregation operators. Furthermore, a new possibility degree, which is proposed to overcome some drawbacks of the existing methods, is introduced for comparisons between the overall values of alternatives to construct a possibility degree matrix. Based on the constructed matrix, all of the alternatives are ranked according to the ranking vector derived from the matrix, and the best one is selected. Finally, the proposed method is applied to a case study on the overseas minerals investment for one of the largest multi-species nonferrous metals companies in China and the results demonstrate the feasibility of the method.
Some new distance measures for type-2 fuzzy sets and distance measure based ranking for group decision making problems In this paper, we propose some distance measures between type-2 fuzzy sets, and also a new family of utmost distance measures are presented. Several properties of different proposed distance measures have been introduced. Also, we have introduced a new ranking method for the ordering of type-2 fuzzy sets based on the proposed distance measure. The proposed ranking method satisfies the reasonable properties for the ordering of fuzzy quantities. Some properties such as robustness, order relation have been presented. Limitations of existing ranking methods have been studied. Further for practical use, a new method for selecting the best alternative, for group decision making problems is proposed. This method is illustrated with a numerical example.
Fuzzy multi attribute group decision making method to achieve consensus under the consideration of degrees of confidence of experts' opinions The aim of this paper is to introduce a fuzzy multi attribute group decision making technique considering the degrees of confidence of experts' opinions. In the process of decision making, each expert provides his/her evaluation over the alternatives depending on a finite set of attributes and constructs an individual fuzzy decision matrix. The proposed technique establishes an iterative process to aggregate the fuzzy information, given by individual expert, into group consensus opinion by using the fuzzy similarity measure. Then, based on group consensus opinion, the proposed approach utilizes the fuzzy similarity measure to find out the most desirable alternative through approximate reasoning. The proposed decision making technique is more flexible due to the fact that it considers the degrees of confidence of experts' opinions. Finally an example has been shown to highlight the proposed methodology.
An interval type-2 fuzzy LINMAP method with approximate ideal solutions for multiple criteria decision analysis. The purpose of this paper is to develop a linear programming technique for multidimensional analysis of preference (LINMAP) to address multiple criteria decision analysis problems within the interval type-2 fuzzy environment based on interval type-2 trapezoidal fuzzy numbers. Considering the issue of anchor dependency, we use multiple anchor points in the decision-making process and employ approximate positive-ideal and negative-ideal solutions as the points of reference. Selected useful properties of the approximate ideal solutions are also investigated. In contrast to the classical LINMAP methods, this paper directly generates approximate ideal solutions from the characteristics of all alternatives. Next, this work presents the concept of closeness-based indices using Minkowski distances with approximate ideal solutions to develop a new approach for determining measurements of consistency and inconsistency. Under incomplete preference information on paired comparisons of the alternatives, this paper provides a novel method that uses the concept of comprehensive closeness-based indices to measure the poorness of fit and the goodness of fit. By applying the consistency indices and inconsistency indices, this work formulates an optimization problem that can be solved for the optimal weights of the criteria and thus acquires the best compromise alternative. Additionally, this paper explores the problem of supplier selection and conducts a comparative discussion to validate the effectiveness and applicability of the proposed interval type-2 fuzzy LINMAP method with approximate ideal solutions. Furthermore, the proposed method is applied to address a marketplace decision difficulty (MPDD)-prone decision-making problem to provide additional contributions for practical implications.
Optimization of interval type-2 fuzzy systems for image edge detection. • The optimization of the antecedent parameters for a type-2 fuzzy system for edge detection is presented. • The goal of interval type-2 fuzzy logic in edge detection methods is to provide the ability to handle uncertainty. • Results show that the Cuckoo search provides better results in optimizing the type-2 fuzzy system.
An extended VIKOR method based on prospect theory for multiple attribute decision making under interval type-2 fuzzy environment Interval type-2 fuzzy set (IT2FS) offers an interesting avenue to handle high order information and uncertainty in decision support systems (DSS) when dealing with both extrinsic and intrinsic aspects of uncertainty. Recently, multiple attribute decision making (MADM) problems with interval type-2 fuzzy information have received increasing attention both from researchers and practitioners. As a result, a number of interval type-2 fuzzy MADM methods have been developed. In this paper, we extend the VIKOR (VlseKriterijumska Optimizacijia I Kompromisno Resenje, in Serbian) method based on prospect theory to accommodate interval type-2 fuzzy circumstances. First, we propose a new distance measure for IT2FS, which comes as a sound alternative when compared with the existing interval type-2 fuzzy distance measures. Then, a decision model integrating the VIKOR method and prospect theory is proposed. A case study concerning a high-tech risk evaluation is provided to illustrate the applicability of the proposed method. In addition, a comparative analysis with the interval type-2 fuzzy TOPSIS method is also presented.
Fuzzy multiple criteria forestry decision making based on an integrated VIKOR and AHP approach Forestation and forest preservation in urban watersheds are issues of vital importance as forested watersheds not only preserve the water supplies of a city but also contribute to soil erosion prevention. The use of fuzzy multiple criteria decision aid (MCDA) in urban forestation has the advantage of rendering subjective and implicit decision making more objective and transparent. An additional merit of fuzzy MCDA is its ability to accommodate quantitative and qualitative data. In this paper an integrated VIKOR-AHP methodology is proposed to make a selection among the alternative forestation areas in Istanbul. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices of AHP. It is found that Omerli watershed is the most appropriate forestation district in Istanbul.
Robot selection by using generalized interval-valued fuzzy numbers with TOPSIS. The aim of this paper is to propose a method to aggregate the opinions of several decision makers on different criteria, regarding a set of alternatives, where the judgments of the decision makers are represented by generalized interval-valued trapezoidal fuzzy numbers. A generalized interval-valued trapezoidal fuzzy number based technique for order preference by similarity to ideal solution is proposed that can reflect subjective judgment and objective information in real life. The weights of criteria and performance rating values of criteria are linguistic variables expressed as generalized interval-valued trapezoidal fuzzy numbers. Finally, an illustrative example is provided to elaborate the proposed method for the selection of a suitable robot according to our requirements.
Type-1 OWA operators for aggregating uncertain information with uncertain weights induced by type-2 linguistic quantifiers The OWA operator proposed by Yager has been widely used to aggregate experts' opinions or preferences in human decision making. Yager's traditional OWA operator focuses exclusively on the aggregation of crisp numbers. However, experts usually tend to express their opinions or preferences in a very natural way via linguistic terms. These linguistic terms can be modelled or expressed by (type-1) fuzzy sets. In this paper, we define a new type of OWA operator, the type-1 OWA operator that works as an uncertain OWA operator to aggregate type-1 fuzzy sets with type-1 fuzzy weights, which can be used to aggregate the linguistic opinions or preferences in human decision making with linguistic weights. The procedure for performing type-1 OWA operations is analysed. In order to identify the linguistic weights associated to the type-1 OWA operator, type-2 linguistic quantifiers are proposed. The problem of how to derive linguistic weights used in type-1 OWA aggregation given such type of quantifier is solved. Examples are provided to illustrate the proposed concepts.
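As background, the quantifier-guided weighting mechanism is easy to show in the crisp case; the sketch below derives OWA weights from an assumed regular increasing monotone quantifier Q(r) = r^a and aggregates crisp scores. The paper's type-1 OWA operator generalizes this to fuzzy-set arguments with fuzzy weights, which the sketch does not attempt.

```python
# Crisp OWA aggregation with weights induced by a quantifier Q(r) = r^a (assumed choice).
import numpy as np

def quantifier_weights(n, a=2.0):
    """w_i = Q(i/n) - Q((i-1)/n) with Q(r) = r^a; the weights sum to 1."""
    i = np.arange(1, n + 1)
    return (i / n)**a - ((i - 1) / n)**a

def owa(values, weights):
    """Ordered weighted average: weights applied to the values sorted in descending order."""
    return np.sum(weights * np.sort(values)[::-1])

scores = np.array([0.9, 0.6, 0.8, 0.3])       # e.g. experts' crisp opinions (made up)
w = quantifier_weights(len(scores), a=2.0)
print(w, owa(scores, w))
```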
A generalization of the power aggregation operators for linguistic environment and its application in group decision making We introduce a wide range of linguistic generalized power aggregation operators. First, we present the generalized power average (GPA) operator and the generalized power ordered weighted average (GPOWA) operator. Then we extend the GPA operator and the GPOWA operator to linguistic environment and propose the linguistic generalized power average (LGPA) operator, the weighted linguistic generalized power average (WLGPA) operator and the linguistic generalized power ordered weighted average (LGPOWA) operator, which are aggregation functions that use linguistic information and generalized mean in the power average (PA) operator. We give their particular cases such as the linguistic power ordered weighted average (LPOWA) operator, the linguistic power ordered weighted geometric average (LPOWGA) operator, the linguistic power ordered weighted harmonic average (LPOWHA) operator and the linguistic power ordered weighted quadratic average (LPOWQA) operator. Finally, we develop an application of the new approach in a multiple attribute group decision making problem concerning the evaluation of university faculty for tenure and promotion.
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
Mixed-signal parallel compressed sensing and reception for cognitive radio A parallel structure to do spectrum sensing in cognitive radio (CR) at sub-Nyquist rate is proposed. The structure is based on compressed sensing (CS) that exploits the sparsity of frequency utilization. Specifically, the received analog signal is segmented or time-windowed and CS is applied to each segment independently using an analog implementation of the inner product, then all the samples are processed together to reconstruct the signal. Applying the CS framework to the analog signal directly relaxes the requirements in wideband RF receiver front-ends. Moreover, the parallel structure provides a design flexibility and scalability on the sensing rate and system complexity. This paper also provides a joint reconstruction algorithm that optimally detects the information symbols from the sub-Nyquist analog projection coefficients. Simulations showing the efficiency of the proposed approach are also presented.
Opportunistic Interference Mitigation Achieves Optimal Degrees-of-Freedom in Wireless Multi-Cell Uplink Networks We introduce an opportunistic interference mitigation (OIM) protocol, where a user scheduling strategy is utilized in K-cell uplink networks with time-invariant channel coefficients and base stations (BSs) having M antennas. Each BS opportunistically selects a set of users who generate the minimum interference to the other BSs. Two OIM protocols are shown according to the number of simultaneously transmitting users per cell, S: opportunistic interference nulling (OIN) and opportunistic interference alignment (OIA). Then, their performance is analyzed in terms of degrees-of-freedom (DoFs). As our main result, it is shown that KM DoFs are achievable under the OIN protocol with M selected users per cell, if the total number of users in a cell, N, scales at least as SNR^((K-1)M). Similarly, it turns out that the OIA scheme with S (< M) selected users achieves KS DoFs, if N scales faster than SNR^((K-1)S). These results indicate that there exists a trade-off between the achievable DoFs and the minimum required N. By deriving the corresponding upper bound on the DoFs, it is shown that the OIN scheme is DoF-optimal. Finally, numerical evaluation, a two-step scheduling method, and the extension to multi-carrier scenarios are shown.
Path Criticality Computation in Parameterized Statistical Timing Analysis Using a Novel Operator This paper presents a method to compute criticality probabilities of paths in parameterized statistical static timing analysis. We partition the set of all the paths into several groups and formulate the path criticality into a joint probability of inequalities. Before evaluating the joint probability directly, we simplify the inequalities through algebraic elimination, handling topological correlation. Our proposed method uses conditional probabilities to obtain the joint probability, and statistics of random variables representing process parameters are changed to take into account the conditions. To calculate the conditional statistics of the random variables, we derive analytic formulas by extending Clark's work. This allows us to obtain the conditional probability density function of a path delay, given the path is critical, as well as to compute criticality probabilities of paths. Our experimental results show that the proposed method provides 4.2X better accuracy on average in comparison to the state-of-the-art method.
1.016137
0.016
0.0157
0.013333
0.013333
0.008949
0.005971
0.002556
0.00072
0.000077
0
0
0
0
Approximation of Quantities of Interest in Stochastic PDEs by the Random Discrete L2 Projection on Polynomial Spaces. In this work we consider the random discrete L-2 projection on polynomial spaces ( hereafter RDP) for the approximation of scalar quantities of interest (QOIs) related to the solution of a partial differential equation model with random input parameters. In the RDP technique the QOI is first computed for independent samples of the random input parameters, as in a standard Monte Carlo approach, and then the QOI is approximated by a multivariate polynomial function of the input parameters using a discrete least squares approach. We consider several examples including the Darcy equations with random permeability, the linear elasticity equations with random elastic coefficient, and the Navier-Stokes equations in random geometries and with random fluid viscosity. We show that the RDP technique is well suited to QOIs that depend smoothly on a moderate number of random parameters. Our numerical tests confirm the theoretical findings in [ G. Migliorati, F. Nobile, E. von Schwerin, and R. Tempone, Analysis of the Discrete L-2 Projection on Polynomial Spaces with Random Evaluations, MOX report 46-2011, Politecnico di Milano, Milano, Italy, submitted], which have shown that, in the case of a single uniformly distributed random parameter, the RDP technique is stable and optimally convergent if the number of sampling points is proportional to the square of the dimension of the polynomial space. Here optimality means that the weighted L-2 norm of the RDP error is bounded from above by the best L-infinity error achievable in the given polynomial space, up to logarithmic factors. In the case of several random input parameters, the numerical evidence indicates that the condition on quadratic growth of the number of sampling points could be relaxed to a linear growth and still achieve stable and optimal convergence. This makes the RDP technique very promising for moderately high dimensional uncertainty quantification.
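A minimal one-dimensional sketch of the RDP idea, assuming a single uniformly distributed parameter and a cheap stand-in quantity of interest instead of the PDE models studied in the paper: sample random points, evaluate the QoI, and fit a Legendre expansion by discrete least squares, with the number of samples scaling quadratically with the dimension of the polynomial space as suggested by the cited stability analysis.

```python
import numpy as np
from numpy.polynomial import legendre

# Stand-in quantity of interest depending smoothly on one uniform parameter y in [-1, 1].
qoi = lambda y: np.exp(0.5 * y) / (1.0 + 0.25 * y**2)

degree = 8                        # polynomial space of dimension degree + 1
n_samples = 2 * (degree + 1) ** 2 # quadratic scaling in the polynomial dimension

rng = np.random.default_rng(0)
y = rng.uniform(-1.0, 1.0, n_samples)

# Discrete least-squares projection onto the Legendre basis (Vandermonde matrix + lstsq).
V = legendre.legvander(y, degree)
coeffs, *_ = np.linalg.lstsq(V, qoi(y), rcond=None)

# Check the approximation error on a fine grid.
yy = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(legendre.legval(yy, coeffs) - qoi(yy)))
print("max error of the RDP approximation:", err)
```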
On Discrete Least-Squares Projection in Unbounded Domain with Random Evaluations and its Application to Parametric Uncertainty Quantification. This work is concerned with approximating multivariate functions in an unbounded domain by using a discrete least-squares projection with random point evaluations. Particular attention is given to functions with random Gaussian or gamma parameters. We first demonstrate that the traditional Hermite (Laguerre) polynomials chaos expansion suffers from the instability in the sense that an unfeasible number of points, which is relevant to the dimension of the approximation space, is needed to guarantee the stability in the least-squares framework. We then propose to use the Hermite/Laguerre functions (rather than polynomials) as bases in the expansion. The corresponding design points are obtained by mapping the uniformly distributed random points in bounded intervals to the unbounded domain, which involved a mapping parameter L. By using the Hermite/Laguerre functions and a proper mapping parameter, the stability can be significantly improved even if the number of design points scales linearly (up to a logarithmic factor) with the dimension of the approximation space. Apart from the stability, another important issue is the rate of convergence. To speed up the convergence, an effective scaling factor is introduced, and a principle for choosing quasi-optimal scaling factor is discussed. Applications to parametric uncertainty quantification are illustrated by considering a random ODE model together with an elliptic problem with lognormal random input.
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
On Sparse Interpolation and the Design of Deterministic Interpolation Points. Motivated by uncertainty quantification and compressed sensing, we build up in this paper the framework for sparse interpolation. The main contribution of this work is twofold: (i) we investigate the theoretical limit of the number of unisolvent points for sparse interpolation under a general setting, and explore the relation between the classical interpolation and the sparse interpolation; (ii) we discuss the design of the interpolation points for the sparse multivariate polynomial expansions, for which the possible applications include uncertainty quantification and compressed sensing. Unlike the traditional random sampling method, we present in this paper a deterministic method to produce the interpolation points, and show its performance with l(1) minimization by analyzing the mutual incoherence of the interpolation matrix. Numerical experiments show that the deterministic points have a similar performance to that of the random points.
An efficient surrogate-based method for computing rare failure probability In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation-it incurs much reduced simulation efforts compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probability as small as 10^-12 ~ 10^-6 with only several hundred samples.
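A simplified sketch of the cross-entropy importance-sampling ingredient, with a trivial limit-state function standing in for the surrogate model and with a one-dimensional Gaussian input; the threshold, sample sizes and elite fraction are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

def ce_importance_sampling(g, gamma, n=2000, rho=0.1, rng=np.random.default_rng(0)):
    """Cross-entropy IS estimate of p = P[g(X) >= gamma] for X ~ N(0, 1)."""
    mu, sigma = 0.0, 1.0                                  # Gaussian biasing density parameters
    for _ in range(50):
        x = rng.normal(mu, sigma, n)
        gx = g(x)
        level = min(gamma, np.quantile(gx, 1.0 - rho))    # elite level for this round
        elite = x[gx >= level]
        w = norm.pdf(elite) / norm.pdf(elite, mu, sigma)  # likelihood ratios of the elites
        mu = np.sum(w * elite) / np.sum(w)
        sigma = np.sqrt(np.sum(w * (elite - mu) ** 2) / np.sum(w))
        if level >= gamma:                                # biasing density reaches the failure region
            break
    x = rng.normal(mu, sigma, n)
    w = norm.pdf(x) / norm.pdf(x, mu, sigma)
    return float(np.mean(w * (g(x) >= gamma)))

g = lambda x: x          # trivial stand-in limit state: failure when X >= gamma
gamma = 4.5
print("CE-IS estimate:", ce_importance_sampling(g, gamma))
print("exact value:   ", norm.sf(gamma))
```

In the surrogate-based variant described in the abstract, g would be a cheap surrogate during the CE iterations and the true model would only be evaluated for the final estimate.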
Analysis of Discrete L2 Projection on Polynomial Spaces with Random Evaluations. We analyze the problem of approximating a multivariate function by discrete least-squares projection on a polynomial space starting from random, noise-free observations. An area of possible application of such technique is uncertainty quantification for computational models. We prove an optimal convergence estimate, up to a logarithmic factor, in the univariate case, when the observation points are sampled in a bounded domain from a probability density function bounded away from zero and bounded from above, provided the number of samples scales quadratically with the dimension of the polynomial space. Optimality is meant in the sense that the weighted L-2 norm of the error committed by the random discrete projection is bounded with high probability from above by the best L-infinity error achievable in the given polynomial space, up to logarithmic factors. Several numerical tests are presented in both the univariate and multivariate cases, confirming our theoretical estimates. The numerical tests also clarify how the convergence rate depends on the number of sampling points, on the polynomial degree, and on the smoothness of the target function.
High-Dimensional Adaptive Sparse Polynomial Interpolation and Applications to Parametric PDEs We consider the problem of Lagrange polynomial interpolation in high or countably infinite dimension, motivated by the fast computation of solutions to partial differential equations (PDEs) depending on a possibly large number of parameters which result from the application of generalised polynomial chaos discretisations to random and stochastic PDEs. In such applications there is a substantial advantage in considering polynomial spaces that are sparse and anisotropic with respect to the different parametric variables. In an adaptive context, the polynomial space is enriched at different stages of the computation. In this paper, we study an interpolation technique in which the sample set is incremented as the polynomial dimension increases, leading therefore to a minimal amount of PDE solving. This construction is based on the standard principle of tensorisation of a one-dimensional interpolation scheme and sparsification. We derive bounds on the Lebesgue constants for this interpolation process in terms of their univariate counterpart. For a class of model elliptic parametric PDEs, we have shown in Chkifa et al. (Modél. Math. Anal. Numér. 47(1):253-280, 2013) that certain polynomial approximations based on Taylor expansions converge in terms of the polynomial dimension with an algebraic rate that is robust with respect to the parametric dimension. We show that this rate is preserved when using our interpolation algorithm. We also propose a greedy algorithm for the adaptive selection of the polynomial spaces based on our interpolation scheme, and illustrate its performance both on scalar valued functions and on parametric elliptic PDEs.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
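A small numpy illustration of the two decompositions highlighted in the survey, written as reconstruction formulas rather than fitting algorithms: a CP tensor as a sum of rank-one terms and a Tucker tensor as a small core multiplied by factor matrices along each mode (all shapes and the rank are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3            # tensor dimensions and CP rank (arbitrary)

# CP / CANDECOMP-PARAFAC: X = sum_r a_r (outer) b_r (outer) c_r
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X_cp = np.einsum('ir,jr,kr->ijk', A, B, C)

# Tucker: X = G x1 U x2 V x3 W, with a small core G and factor matrices U, V, W
P, Q, S = 2, 2, 2
G = rng.standard_normal((P, Q, S))
U = rng.standard_normal((I, P))
V = rng.standard_normal((J, Q))
W = rng.standard_normal((K, S))
X_tucker = np.einsum('pqs,ip,jq,ks->ijk', G, U, V, W)

print(X_cp.shape, X_tucker.shape)   # both (4, 5, 6)
```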
Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. We discuss the arbitrary polynomial chaos (aPC), which has been the subject of research in a few recent theoretical papers. Like all polynomial chaos expansion techniques, aPC approximates the dependence of simulation model output on model parameters by expansion in an orthogonal polynomial basis. The aPC generalizes chaos expansion techniques towards arbitrary distributions with arbitrary probability measures, which can be either discrete, continuous, or discretized continuous and can be specified either analytically (as probability density/cumulative distribution functions), numerically as histogram or as raw data sets. We show that the aPC at finite expansion order only demands the existence of a finite number of moments and does not require the complete knowledge or even existence of a probability density function. This avoids the necessity to assign parametric probability distributions that are not sufficiently supported by limited available data. Alternatively, it allows modellers to choose the shapes of their statistical assumptions freely, without technical constraints. Our key idea is to align the complexity level and order of analysis with the reliability and detail level of statistical information on the input parameters. We provide conditions for existence and clarify the relation of the aPC to statistical moments of model parameters. We test the performance of the aPC with diverse statistical distributions and with raw data. In these exemplary test cases, we illustrate the convergence with increasing expansion order and, for the first time, with increasing reliability level of statistical input information. Our results indicate that the aPC shows an exponential convergence rate and converges faster than classical polynomial chaos expansion techniques.
Numerical approach for quantification of epistemic uncertainty In the field of uncertainty quantification, uncertainty in the governing equations may assume two forms: aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty can be characterised by known probability distributions whilst epistemic uncertainty arises from a lack of knowledge of probabilistic information. While extensive research efforts have been devoted to the numerical treatment of aleatory uncertainty, little attention has been given to the quantification of epistemic uncertainty. In this paper, we propose a numerical framework for quantification of epistemic uncertainty. The proposed methodology does not require any probabilistic information on uncertain input parameters. The method only necessitates an estimate of the range of the uncertain variables that encapsulates the true range of the input variables with overwhelming probability. To quantify the epistemic uncertainty, we solve an encapsulation problem, which is a solution to the original governing equations defined on the estimated range of the input variables. We discuss solution strategies for solving the encapsulation problem and the sufficient conditions under which the numerical solution can serve as a good estimator for capturing the effects of the epistemic uncertainty. In the case where probability distributions of the epistemic variables become known a posteriori, we can use the information to post-process the solution and evaluate solution statistics. Convergence results are also established for such cases, along with strategies for dealing with mixed aleatory and epistemic uncertainty. Several numerical examples are presented to demonstrate the procedure and properties of the proposed methodology.
Mathematical Foundations of Computer Science 1989, MFCS'89, Porabka-Kozubnik, Poland, August 28 - September 1, 1989, Proceedings
Video Transport Evaluation With H.264 Video Traces. The performance evaluation of video transport mechanisms becomes increasingly important as encoded video accounts for growing portions of the network traffic. Compared to the widely studied MPEG-4 encoded video, the recently adopted H.264 video coding standards include novel mechanisms, such as hierarchical B frame prediction structures and highly efficient quality scalable coding, that have impor...
Frequency domain subspace-based identification of discrete-time singular power spectra In this paper, we propose a subspace algorithm for the identification of linear-time-invariant discrete-time systems with more outputs than inputs from measured power spectrum data. The proposed identification algorithm is interpolatory and strongly consistent when the corruptions in the spectrum measurements have a bounded covariance function. Asymptotic performance and the interpolation properties of the proposed algorithm are illustrated by means of a numerical example.
Bounding the Dynamic Behavior of an Uncertain System via Polynomial Chaos-based Simulation Parametric uncertainty can represent parametric tolerance, parameter noise or parameter disturbances. The effects of these uncertainties on the time evolution of a system can be extremely significant, mostly when studying closed-loop operation of control systems. The presence of uncertainty makes the modeling process challenging, since it is impossible to express the behavior of the system with a deterministic approach. If the uncertainties can be defined in terms of probability density function, probabilistic approaches can be adopted. In many cases, the most useful aspect is the evaluation of the worst-case scenario, thus limiting the problem to the evaluation of the boundary of the set of solutions. This is particularly true for the analysis of robust stability and performance of a closed-loop system. The goal of this paper is to demonstrate how the polynomial chaos theory (PCT) can simplify the determination of the worst-case scenario, quickly providing the boundaries in time domain. The proposed approach is documented with examples and with the description of the Maple worksheet developed by the authors for the automatic processing in the PCT framework.
1.015421
0.015813
0.014286
0.007946
0.007143
0.005304
0.002184
0.000332
0.000051
0.000002
0
0
0
0
Metamodelling with independent and dependent inputs. In the cases of computationally expensive models the metamodelling technique which maps inputs and outputs is a very useful and practical way of making computations tractable. A number of new techniques which improve the efficiency of the Random Sampling-High dimensional model representation (RS-HDMR) for models with independent and dependent input variables are presented. Two different metamodelling methods for models with dependent input variables are compared. Both techniques are based on a Quasi Monte Carlo variant of RS-HDMR. The first technique makes use of transformation of the dependent input vector into a Gaussian independent random vector and then applies the decomposition of the model using a tensored Hermite polynomial basis. The second approach uses a direct decomposition of the model function into a basis which consists of the marginal distributions of input components and their joint distribution. For both methods the copula formalism is used. Numerical tests prove that the developed methods are robust and efficient.
Using sparse polynomial chaos expansions for the global sensitivity analysis of groundwater lifetime expectancy in a multi-layered hydrogeological model. The study makes use of polynomial chaos expansions to compute Sobol' indices within the frame of a global sensitivity analysis of hydro-dispersive parameters in a simplified vertical cross-section of a segment of the subsurface of the Paris Basin. Applying conservative ranges, the uncertainty in 78 input variables is propagated upon the mean lifetime expectancy of water molecules departing from a specific location within a highly confining layer situated in the middle of the model domain. Lifetime expectancy is a hydrogeological performance measure pertinent to safety analysis with respect to subsurface contaminants, such as radionuclides. The sensitivity analysis indicates that the variability in the mean lifetime expectancy can be sufficiently explained by the uncertainty in the petrofacies, i.e. the sets of porosity and hydraulic conductivity, of only a few layers of the model. The obtained results provide guidance regarding the uncertainty modeling in future investigations employing detailed numerical models of the subsurface of the Paris Basin. Moreover, the study demonstrates the high efficiency of sparse polynomial chaos expansions in computing Sobol' indices for high-dimensional models.
A least-squares method for sparse low rank approximation of multivariate functions In this paper, we propose a low rank approximation method based on discrete least-squares for the approximation of a multivariate function from random, noise-free observations. Sparsity inducing regularization techniques are used within classical algorithms for low rank approximation in order to exploit the possible sparsity of low rank approximations. Sparse low rank approximations are constructed with a robust updated greedy algorithm, which includes an optimal selection of regularization parameters and approximation ranks using cross validation techniques. Numerical examples demonstrate the capability of approximating functions of many variables even when very few function evaluations are available, thus proving the interest of the proposed algorithm for the propagation of uncertainties through complex computational models.
Efficient computation of global sensitivity indices using sparse polynomial chaos expansions Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol’ indices are well-known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may reveal hardly applicable for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms whereas the PC coefficients are computed by least-square regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2–3 orders of magnitude smaller than that associated with Monte Carlo simulation.
Global sensitivity analysis using polynomial chaos expansions Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol’ indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol’ indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2–3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol’ indices.
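A sketch of the post-processing step described above, assuming the PCE coefficients and their multi-indices are already available from a regression step and that the basis is orthonormal: first-order and total Sobol' indices are obtained by summing squared coefficients grouped by multi-index. The two-variable expansion at the bottom is a made-up example.

```python
import numpy as np

def sobol_from_pce(multi_indices, coeffs):
    """First-order and total Sobol' indices from PCE coefficients in an orthonormal basis."""
    alpha = np.asarray(multi_indices)          # shape (n_terms, n_vars)
    c2 = np.asarray(coeffs, dtype=float) ** 2  # squared coefficients
    nonconstant = alpha.sum(axis=1) > 0
    total_var = c2[nonconstant].sum()
    n_vars = alpha.shape[1]
    first, total = np.zeros(n_vars), np.zeros(n_vars)
    for i in range(n_vars):
        involves_i = alpha[:, i] > 0
        only_i = involves_i & (alpha[:, np.arange(n_vars) != i].sum(axis=1) == 0)
        first[i] = c2[only_i].sum() / total_var   # variance due to variable i alone
        total[i] = c2[involves_i].sum() / total_var  # variance of all terms involving i
    return first, total

# Hypothetical 2-variable PCE: constant, x1, x2, and an x1*x2 interaction term.
alpha = [[0, 0], [1, 0], [0, 1], [1, 1]]
coeffs = [1.0, 0.8, 0.4, 0.2]
S1, ST = sobol_from_pce(alpha, coeffs)
print("first-order indices:", S1, "total indices:", ST)
```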
A non-adapted sparse approximation of PDEs with stochastic inputs We propose a method for the approximation of solutions of PDEs with stochastic coefficients based on the direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for truly high-dimensional problems.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
Stochastic formulation of SPICE-type electronic circuit simulation with polynomial chaos A methodology for efficient tolerance analysis of electronic circuits based on nonsampling stochastic simulation of transients is formulated, implemented, and validated. We model the stochastic behavior of all quantities that are subject to tolerance spectrally with polynomial chaos. A library of stochastic models of linear and nonlinear circuit elements is created. In analogy to the deterministic implementation of the SPICE electronic circuit simulator, the overall stochastic circuit model is obtained using nodal analysis. In the proposed case studies, we analyze the influence of device tolerance on the response of a lowpass filter, the impact of temperature variability on the output of an amplifier, and the effect of changes of the load of a diode bridge on the probability density function of the output voltage. The case studies demonstrate that the novel methodology is computationally faster than the Monte Carlo method and more accurate and flexible than the root-sum-square method. This makes the stochastic circuit simulator, referred to as PolySPICE, a compelling candidate for the tolerance study of reliability-critical electronic circuits.
Efficient algorithm for the computation of on-chip capacitance sensitivities with respect to a large set of parameters Recent CAD methodologies of design-for-manufacturability (DFM) have naturally led to a significant increase in the number of process and layout parameters that have to be taken into account in design-rule checking. Methodological consistency requires that a similar number of parameters be taken into account during layout parasitic extraction. Because of the inherent variability of these parameters, the issue of efficiently extracting deterministic parasitic sensitivities with respect to such a large number of parameters must be addressed. In this paper, we tackle this very issue in the context of capacitance sensitivity extraction. In particular, we show how the adjoint sensitivity method can be efficiently integrated within a finite-difference (FD) scheme to compute the sensitivity of the capacitance with respect to a large set of BEOL parameters. If np is the number of parameters, the speedup of the adjoint method is shown to be a factor of np/2 with respect to direct FD sensitivity techniques. The proposed method has been implemented and verified on a 65 nm BEOL cross section having 10 metal layers and a total number of 59 parameters. Because of its speed, the method can be advantageously used to prune out of the CAD flow those BEOL parameters that yield a capacitance sensitivity less than a given threshold.
Statistical timing analysis for intra-die process variations with spatial correlations Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
Robust traffic anomaly detection with principal component pursuit Principal component analysis (PCA) is a statistical technique that has been used for data analysis and dimensionality reduction. It was introduced as a network traffic anomaly detection technique first in [1]. Since then, a lot of research attention has been received, which results in an extensive analysis and several extensions. In [2], the sensitivity of PCA to its tuning parameters, such as the dimension of the low-rank subspace and the detection threshold, on traffic anomaly detection was indicated. However, no explanation on the underlying reasons of the problem was given in [2]. In [3], further investigation on the PCA sensitivity was conducted and it was found that the PCA sensitivity comes from the inability of PCA to detect temporal correlations. Based on this finding, an extension of PCA to Karhunen-Loeve expansion (KLE) was proposed in [3]. While KLE shows slight improvement, it still exhibits a similar sensitivity issue since a new tuning parameter called temporal correlation range was introduced. Recently, in [4], additional effort was paid to illustrate the PCA-poisoning problem. To underline this problem, an evading strategy called Boiled-Frog was proposed which adds a high fraction of outliers to the traffic. To defend against this, the authors employed a more robust version of PCA called PCA-GRID. While PCA-GRID shows performance improvement regarding the robustness to the outliers, it experiences a high sensitivity to the threshold estimate and the k-dimensional subspace that maximizes the dispersion of the data. The purpose of this work is to consider another technique to address the PCA poisoning problems to provide robust traffic anomaly detection: The Principal Component Pursuit.
Uncertainty bounds and their use in the design of interval type-2 fuzzy logic systems We derive inner- and outer-bound sets for the type-reduced set of an interval type-2 fuzzy logic system (FLS), based on a new mathematical interpretation of the Karnik-Mendel iterative procedure for computing the type-reduced set. The bound sets can not only provide estimates about the uncertainty contained in the output of an interval type-2 FLS, but can also be used to design an interval type-2 FLS. We demonstrate, by means of a simulation experiment, that the resulting system can operate without type-reduction and can achieve similar performance to one that uses type-reduction. Therefore, our new design method, based on the bound sets, can relieve the computation burden of an interval type-2 FLS during its operation, which makes an interval type-2 FLS useful for real-time applications.
Interval-valued fuzzy line graphs. In this paper, we introduce the concept of interval-valued fuzzy line graphs and discuss some of their properties. We prove a necessary and sufficient condition for an interval-valued fuzzy graph to be isomorphic to its corresponding interval-valued fuzzy line graph. We determine when an isomorphism between two interval-valued fuzzy graphs follows from an isomorphism of their corresponding interval-valued fuzzy line graphs. We state some applications of interval-valued fuzzy line graphs in database theory, expert systems, neural networks, decision making problems, and geographical information systems.
Stochastic approximation learning for mixtures of multivariate elliptical distributions Most of the current approaches to mixture modeling consider mixture components from a few families of probability distributions, in particular from the Gaussian family. The reasons of these preferences can be traced to their training algorithms, typically versions of the Expectation-Maximization (EM) method. The re-estimation equations needed by this method become very complex as the mixture components depart from the simplest cases. Here we propose to use a stochastic approximation method for probabilistic mixture learning. Under this method it is straightforward to train mixtures composed by a wide range of mixture components from different families. Hence, it is a flexible alternative for mixture learning. Experimental results are presented to show the probability density and missing value estimation capabilities of our proposal.
1.070444
0.035556
0.014222
0.00824
0.002519
0.001208
0.000167
0.000037
0.000003
0
0
0
0
0
SIMULATING THE LONG TERM EVOLUTION PHYSICAL LAYER Research and development of signal processing algorithms for UMTS Long Term Evolution (LTE) requires a realistic, flexible, and standard-compliant simulation environment. To facilitate comparisons with work of other research groups such a simulation environment should ideally be publicly available. In this paper, we present a MATLAB-based downlink physical-layer simulator for LTE. We identify different research applications that are covered by our simulator. Depending on the research focus, the simulator offers to carry out single-downlink, single-cell multi-user, and multi-cell multi-user simulations. By utilizing the Parallel Computing Toolbox of MATLAB, the simulator can efficiently be executed on multi-core processors to significantly reduce the simulation time.
MIMO technologies in 3GPP LTE and LTE-advanced 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.
On the way towards fourth-generation mobile: 3GPP LTE and LTE-advanced Long-Term Evolution (LTE) is the new standard recently specified by the 3GPP on the way towards fourth-generation mobile. This paper presents the main technical features of this standard as well as its performance in terms of peak bit rate and average cell throughput, among others. LTE entails a big technological improvement as compared with the previous 3G standard. However, this paper also demonstrates that LTE performance does not fulfil the technical requirements established by ITU-R to classify one radio access technology as a member of the IMT-Advanced family of standards. Thus, this paper describes the procedure followed by the 3GPP to address these challenging requirements. Through the design and optimization of new radio access techniques and a further evolution of the system, the 3GPP is laying down the foundations of the future LTE-Advanced standard, the 3GPP candidate for 4G. This paper offers a brief insight into these technological trends.
Fading correlation and its effect on the capacity of multielement antenna systems We investigate the effects of fading correlations in multielement antenna (MEA) communication systems. Pioneering studies showed that if the fades connecting pairs of transmit and receive antenna elements are independently, identically distributed, MEAs offer a large increase in capacity compared to single-antenna systems. An MEA system can be described in terms of spatial eigenmodes, which are single-input single-output subchannels. The channel capacity of an MEA is the sum of capacities of these subchannels. We show that the fading correlation affects the MEA capacity by modifying the distributions of the gains of these subchannels. The fading correlation depends on the physical parameters of MEA and the scatterer characteristics. In this paper, to characterize the fading correlation, we employ an abstract model, which is appropriate for modeling narrow-band Rayleigh fading in fixed wireless systems
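A Monte Carlo sketch of how fading correlation reduces ergodic MEA capacity, using the common Kronecker correlation model H = Rr^(1/2) Hw Rt^(1/2) with exponential correlation matrices and equal power allocation; these modelling choices and the SNR value are assumptions for illustration, not the abstract correlation model of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def exp_corr(n, rho):
    """Exponential correlation matrix R[i, j] = rho**|i - j| (a common simple model)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def ergodic_capacity(nt, nr, snr_db, rho, trials=2000, rng=np.random.default_rng(0)):
    snr = 10.0 ** (snr_db / 10.0)
    Rt_half = sqrtm(exp_corr(nt, rho))
    Rr_half = sqrtm(exp_corr(nr, rho))
    cap = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = Rr_half @ Hw @ Rt_half                    # Kronecker-correlated Rayleigh channel
        M = np.eye(nr) + (snr / nt) * H @ H.conj().T  # equal power over transmit antennas
        cap += np.log2(np.linalg.det(M).real)
    return cap / trials

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: {ergodic_capacity(4, 4, 10.0, rho):.2f} bit/s/Hz")
```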
Wireless Communication
Symmetrical frame discard method for 3D video over IP networks Three dimensional (3D) video is expected to be an important application for broadcast and IP streaming services. One of the main limitations for the transmission of 3D video over IP networks is network bandwidth mismatch due to the large size of 3D data, which causes fatal decoding errors and mosaic-like damage. This paper presents a novel selective frame discard method to address the problem. The main idea of the proposed method is the symmetrical discard of the two dimensional (2D) video frame and the depth map frame, which enables the efficient utilization of the network bandwidth. Also, the frames to be discarded are selected after additional consideration of the playback deadline, the network bandwidth, and the inter-frame dependency relationship within a group of pictures (GOP). The simulation results demonstrate that the proposed method enhances the media quality of 3D video streaming even in the case of bad network conditions. The proposed method is expected to be used for Internet protocol (IP) based 3D video streaming applications such as 3D IPTV.
A New EDI-based Deinterlacing Algorithm In this paper, we propose a new deinterlacing algorithm using edge direction field, edge parity, and motion expansion scheme. The algorithm consists of an EDI (edge dependent interpolation)-based intra-field deinterlacing and inter-field deinterlacing that uses block-based motion detection. Most of the EDI algorithms use pixel-by-pixel or block-by-block distance to estimate the edge direction, which results in many annoying artifacts. We propose the edge direction field, and estimate an interpolation direction using the field and SAD (sum of absolute differences) values. The edge direction field is a set of edge orientations and their gradient magnitudes. The proposed algorithm assumes that a local minimum around the gradient edge field is most probably the true edge direction. Our approach provides good visual results for various kinds of edges (horizontal, narrow and weak). We also propose a new temporal interpolation method based on block motion detection. The algorithm works reliably in scenes which have very fast moving objects and low SNR signals. Experimental results on various data sets show that the proposed algorithm works well for the diverse kinds of sequences and reconstructs flicker-free details in the static region.
A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video While objective and subjective quality assessment of 2D images and video have been an active research topic in the recent years, emerging 3D technologies require new quality metrics and methodologies taking into account the fundamental differences in the human visual perception and typical distortions of stereoscopic content. Therefore, this paper presents a comprehensive stereoscopic video database that contains a large variety of scenes captured using a stereoscopic camera setup consisting of two HD camcorders with different capture parameters. In addition to the video, the database also provides subjective quality scores obtained using a tailored single stimulus continuous quality scale (SSCQS) method. The resulting mean opinion scores can be used to evaluate the performance of visual quality metrics as well as for the comparison and for the design of new metrics.
Waiting times in quality of experience for web based services A considerable share of applications such as web or e-mail browsing, online picture viewing and file downloads imply waiting times for their users, which is due to the turn-taking of information requests by the user and corresponding response times until each request is fulfilled. Thus, end-user quality perception in the context of interactive data services is dominated by waiting times; the longer the latter, the less satisfied the user becomes. As opposed to heavily researched multimedia experience, perception of waiting times is still not strongly explored in the context of Quality of Experience (QoE). This tutorial will contribute to closing this gap. In its first part, it addresses perception principles and discusses their applicability towards fundamental relationships between waiting times and resulting QoE. It then investigates to which extent the same relationships can also be used to describe QoE for more complex services such as web browsing. Finally, it discusses applications where waiting times determine QoE, amongst other factors. For example, the past shift from UDP media streaming to TCP media streaming (e.g. youtube.com) has extended the relevance of waiting times also to the domain of online video services. In particular, user-perceived quality suffers from initial delays when applications are launched, as well as from freezes during the delivery of the stream. These aspects, which have to be traded against each other to some extent, will be discussed mainly for HTTP video streaming in the last part of this tutorial.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t−τ) obeying |T| ≤ C_M·(log N)^{-1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1−O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1−O(N^{−M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
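The paper recovers the spike train by solving an equality-constrained ℓ1 program; as a stand-in, the sketch below solves the closely related unconstrained problem min 0.5*||Ax − b||^2 + λ||x||_1 with plain iterative soft thresholding (ISTA), and uses random Gaussian measurements rather than partial Fourier samples. The problem sizes and λ are illustrative choices.

```python
import numpy as np

def ista(A, b, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the quadratic gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

x_hat = ista(A, b, lam=1e-3)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```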
Block-sparse signals: uncertainty relations and efficient recovery We consider efficient methods for the recovery of block-sparse signals--i.e., sparse signals that have nonzero entries occurring in clusters--from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l2/l1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
The collapsing method of defuzzification for discretised interval type-2 fuzzy sets This paper proposes a new approach for defuzzification of interval type-2 fuzzy sets. The collapsing method converts an interval type-2 fuzzy set into a type-1 representative embedded set (RES), whose defuzzified value closely approximates that of the type-2 set. As a type-1 set, the RES can then be defuzzified straightforwardly. The novel representative embedded set approximation (RESA), to which the method is inextricably linked, is expounded, stated and proved within this paper. It is presented in two forms: Simple RESA: this approximation deals with the most simple interval FOU, in which a vertical slice is discretised into 2 points. Interval RESA: this approximation concerns the case in which a vertical slice is discretised into 2 or more points. The collapsing method (simple RESA version) was tested for accuracy and speed, with excellent results on both criteria. The collapsing method proved more accurate than the Karnik-Mendel iterative procedure (KMIP) for an asymmetric test set. For both a symmetric and an asymmetric test set, the collapsing method outperformed the KMIP in relation to speed.
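For comparison with the collapsing method, the sketch below is a plain implementation of the Karnik-Mendel iterative procedure (KMIP) that computes the centroid bounds [c_l, c_r] of a discretised interval type-2 set; the footprint of uncertainty used here is an arbitrary example, not one of the paper's test sets.

```python
import numpy as np

def km_endpoint(x, lower, upper, which):
    """Karnik-Mendel iteration for one centroid endpoint of an IT2 fuzzy set.
    which='left' gives c_l, which='right' gives c_r. x must be sorted ascending."""
    theta = (lower + upper) / 2.0
    c = np.dot(x, theta) / np.sum(theta)
    for _ in range(100):                              # KM converges in a few iterations
        k = int(np.clip(np.searchsorted(x, c) - 1, 0, len(x) - 2))
        left_of_switch = np.arange(len(x)) <= k
        if which == 'left':                           # minimise: heavy weights on small x
            theta = np.where(left_of_switch, upper, lower)
        else:                                         # maximise: heavy weights on large x
            theta = np.where(left_of_switch, lower, upper)
        c_new = np.dot(x, theta) / np.sum(theta)
        if np.isclose(c_new, c):
            break
        c = c_new
    return c_new

# Arbitrary discretised FOU on [0, 10]: triangular upper MF and a scaled-down lower MF.
x = np.linspace(0.0, 10.0, 101)
upper = np.maximum(1.0 - np.abs(x - 4.0) / 4.0, 0.0)
lower = 0.6 * np.maximum(1.0 - np.abs(x - 5.0) / 3.0, 0.0)
c_l = km_endpoint(x, lower, upper, 'left')
c_r = km_endpoint(x, lower, upper, 'right')
print("centroid interval:", (c_l, c_r), "defuzzified value:", 0.5 * (c_l + c_r))
```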
The n-dimensional fuzzy sets and Zadeh fuzzy sets based on the finite valued fuzzy sets The connections among the n-dimensional fuzzy set, Zadeh fuzzy set and the finite-valued fuzzy set are established in this paper. The n-dimensional fuzzy set, a special L-fuzzy set, is first defined. It is pointed out that the n-dimensional fuzzy set is a generalization of the Zadeh fuzzy set, the interval-valued fuzzy set, the intuitionistic fuzzy set, the interval-valued intuitionistic fuzzy set and the three dimensional fuzzy set. Then, the definitions of cut set on n-dimensional fuzzy set and n-dimensional vector level cut set of Zadeh fuzzy set are presented. The cut set of the n-dimensional fuzzy set and n-dimensional vector level set of the Zadeh fuzzy set are both defined as n+1-valued fuzzy sets. It is shown that a cut set defined in this way has the same properties as a normal cut set of the Zadeh fuzzy set. Finally, by the use of these cut sets, decomposition and representation theorems of the n-dimensional fuzzy set and new decomposition and representation theorems of the Zadeh fuzzy set are constructed.
For-All Sparse Recovery in Near-Optimal Time. An approximate sparse recovery system in ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement Φ; and a recovery algorithm R. Given a vector, x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖_1 ≤ (1+ε)‖x − x_k‖_1. We consider the "for all" model, in which a single matrix Φ, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss [2012] uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1−α} N^α) for any constant α > 0. In this article, we improve the number of measurements to O(ε^{−2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), with a modest restriction that k ≤ N^{1−α} and ε ≤ (log k/log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(N, 1/ε)). With no restrictions on ε, we have an approximation recovery system with m = O((k/ε) log(N/k)((log N/log k)^γ + 1/ε)) measurements. The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or with two unbalanced expanders for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which will be encoded to a longer message m'_i (in contrast to Porat and Strauss [2012] in which the encoding is simply the identity). Portions of the message m'_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions. The decoding or recovery algorithm consists of recovering the portions of the longer messages m'_i and then decoding to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
1.046053
0.034367
0.034367
0.017756
0.007025
0.000127
0.000039
0.000008
0.000003
0
0
0
0
0
Minimum error thresholding A computationally efficient solution to the problem of minimum error thresholding is derived under the assumption of object and pixel grey level values being normally distributed. The method is applicable in multithreshold selection.
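A direct histogram-based sketch of the minimum-error criterion under the stated assumption of normally distributed object and background grey levels; the synthetic bimodal image data are an illustrative assumption.

```python
import numpy as np

def minimum_error_threshold(hist, eps=1e-10):
    """Kittler-Illingworth minimum error thresholding on a grey-level histogram."""
    p = hist.astype(float) / hist.sum()
    g = np.arange(len(p))
    best_t, best_J = 0, np.inf
    for t in range(1, len(p) - 1):
        P1, P2 = p[:t].sum(), p[t:].sum()          # class proportions below/above t
        if P1 < eps or P2 < eps:
            continue
        m1 = (g[:t] * p[:t]).sum() / P1
        m2 = (g[t:] * p[t:]).sum() / P2
        s1 = np.sqrt(((g[:t] - m1) ** 2 * p[:t]).sum() / P1) + eps
        s2 = np.sqrt(((g[t:] - m2) ** 2 * p[t:]).sum() / P2) + eps
        # Minimum-error criterion under the two-Gaussian assumption.
        J = 1 + 2 * (P1 * np.log(s1) + P2 * np.log(s2)) - 2 * (P1 * np.log(P1) + P2 * np.log(P2))
        if J < best_J:
            best_t, best_J = t, J
    return best_t

# Synthetic bimodal image: object and background grey levels drawn from two Gaussians.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(70, 10, 20000), rng.normal(160, 20, 30000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
print("minimum-error threshold:", minimum_error_threshold(hist))
```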
Adaptive Smoothing: A General Tool for Early Vision A method to smooth a signal while preserving discontinuities is presented. This is achieved by repeatedly convolving the signal with a very small averaging mask weighted by a measure of the signal continuity at each point. Edge detection can be performed after a few iterations, and features extracted from the smoothed signal are correctly localized (hence, no tracking is needed). This last property allows the derivation of a scale-space representation of a signal using the adaptive smoothing parameter k as the scale dimension. The relation of this process to anisotropic diffusion is shown. A scheme to preserve higher-order discontinuities and results on range images is proposed. Different implementations of adaptive smoothing are presented, first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, then on a single instruction multiple data (SIMD) parallel machine such as the Connection Machine. Various applications of adaptive smoothing such as edge detection, range image feature extraction, corner detection, and stereo matching are discussed.
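A one-dimensional sketch of the scheme, assuming the continuity measure is taken as exp(−g²/(2k²)) of the local gradient g (a common choice); the test signal and the value of k are illustrative.

```python
import numpy as np

def adaptive_smooth(signal, k=0.1, iterations=50):
    """Repeatedly average each sample with its neighbours, weighted by local continuity."""
    s = np.asarray(signal, dtype=float).copy()
    for _ in range(iterations):
        grad = np.gradient(s)
        w = np.exp(-grad ** 2 / (2.0 * k ** 2))      # small weight across discontinuities
        padded_s, padded_w = np.pad(s, 1, mode='edge'), np.pad(w, 1, mode='edge')
        # Weighted 3-tap average: neighbours contribute according to their continuity weight.
        num = sum(padded_w[i:i + len(s)] * padded_s[i:i + len(s)] for i in range(3))
        den = sum(padded_w[i:i + len(s)] for i in range(3))
        s = num / den
    return s

# Noisy step signal: adaptive smoothing removes noise while keeping the step edge sharp.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(100), np.ones(100)]) + 0.05 * rng.standard_normal(200)
smoothed = adaptive_smooth(x)
print("edge contrast after smoothing:", smoothed[105] - smoothed[95])
```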
Segmentation and estimation of image region properties through cooperative hierarchical computation The task of segmenting an image and that of estimating properties of image regions may be highly interdependent. The goal of segmentation is to partition the image into regions with more or less homogeneous properties; but the processes which estimate these properties should be confined within individual regions. A cooperative, iterative approach to segmentation and property estimation is defined; the results of each process at a given iteration are used to adjust the other process at the next iteration. A linked pyramid structure provides a framework for this process iteration. This hierarchical structure ensures rapid convergence even with strictly local communication between pyramid nodes.
Edge detection using two-dimensional local structure information Local intensity discontinuities, commonly referred to as edges, are important attributes of an image. Many imaging scenarios produce image regions exhibiting complex two-dimensional (2D) local structure, such as when several edges meet to form corners and vertices. Traditional derivative-based edge operators, which typically assume that an edge can be modeled as a one-dimensional (1D) piecewise smooth step function, give misleading results in such situations. Leclerc and Zucker introduced the concept of local structure as an aid for locating intensity discontinuities. They proposed a detailed procedure for detecting discontinuities in a 1D function. They had only given a preliminary version of their scheme, however, for 2D images. Three related edge-detection methods are proposed that draw upon 2D local structural information. The first method greatly expands upon Leclerc and Zucker's 2D method. The other two methods employ a mechanism similar to that used by the maximum-homogeneity filter (a filter used for image enhancement). All three methods permit the detection of multiple edges at a point and have the flexibility to detect edges at differing spatial and angular acuity. Results show that the methods typically perform better than other operators.
Soft clustering of multidimensional data: a semi-fuzzy approach This paper discusses new approaches to unsupervised fuzzy classification of multidimensional data. In the developed clustering models, patterns are considered to belong to some but not necessarily all clusters. Accordingly, such algorithms are called ‘semi-fuzzy’ or ‘soft’ clustering techniques. Several models to achieve this goal are investigated and corresponding implementation algorithms are developed. Experimental results are reported.
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate in within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical timing (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity based operating condition as a supporting construct for variation-aware STA flows
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
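The maximum-margin formulation described here is what is now commonly trained as a support vector machine. Below is a small sketch using scikit-learn's linear SVC (a hard-margin approximation obtained with a large C); the data set and parameter values are illustrative, and the "supporting patterns" appear as the fitted model's support vectors.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D, linearly separable data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=2.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A large C approximates the hard-margin ("optimal margin") classifier.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# The decision boundary is a linear combination of the supporting patterns,
# i.e. the training points closest to the boundary.
print("number of support vectors:", len(clf.support_vectors_))
print("weights:", clf.coef_, "bias:", clf.intercept_)
```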
A review on spectrum sensing for cognitive radio: challenges and solutions Cognitive radio is widely expected to be the next Big Bang in wireless communications. Spectrum sensing, that is, detecting the presence of the primary users in a licensed spectrum, is a fundamental problem for cognitive radio. As a result, spectrum sensing has been reborn as a very active research area in recent years despite its long history. In this paper, spectrum sensing techniques from the optimal likelihood ratio test to energy detection, matched filtering detection, cyclostationary detection, eigenvalue-based sensing, joint space-time sensing, and robust sensing methods are reviewed. Cooperative spectrum sensing with multiple receivers is also discussed. Special attention is paid to sensing methods that need little prior information on the source signal and the propagation channel. Practical challenges such as noise power uncertainty are discussed and possible solutions are provided. Theoretical analysis on the test statistic distribution and threshold setting is also investigated.
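Energy detection, the simplest of the techniques surveyed, can be sketched in a few lines: compare the average received energy against a threshold chosen for a target false-alarm probability. The Gaussian threshold approximation below assumes real-valued Gaussian noise with known power and is only one of several threshold-setting rules discussed in this literature.

```python
import numpy as np
from scipy.stats import norm

def energy_detector(samples, noise_power, pfa=0.01):
    """Minimal energy-detection sketch for spectrum sensing.

    Declares a primary user present when the average sample energy exceeds a
    threshold. Under the noise-only hypothesis with real Gaussian noise of
    variance noise_power, the statistic is approximately
    N(noise_power, 2*noise_power^2/n) for large n, which sets the threshold.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    test_stat = np.mean(samples ** 2)
    threshold = noise_power * (1.0 + norm.ppf(1.0 - pfa) * np.sqrt(2.0 / n))
    return test_stat > threshold, test_stat, threshold

# Example: noise-only observation should (usually) stay below the threshold.
rng = np.random.default_rng(0)
print(energy_detector(rng.standard_normal(10_000), noise_power=1.0))
```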
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
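A toy sketch of the selection step: each relay is scored by its weaker hop (a max-min rule over the instantaneous source-to-relay and relay-to-destination channel gains), and the highest-scoring relay is chosen. The Rayleigh-fading example values are assumptions; the paper also discusses alternative scoring functions such as the harmonic mean of the two hops.

```python
import numpy as np

def select_best_relay(h_sr, h_rd):
    """Pick the 'best' relay from per-relay instantaneous channel estimates.

    h_sr[i] and h_rd[i] are the source->relay i and relay i->destination
    channel gains. This sketch scores each relay by the weaker of its two
    hops (a max-min rule) and returns the index of the best relay.
    """
    end_to_end = np.minimum(np.abs(h_sr) ** 2, np.abs(h_rd) ** 2)
    return int(np.argmax(end_to_end))

# Example with M = 4 relays and unit-power Rayleigh-faded channels.
rng = np.random.default_rng(1)
M = 4
h_sr = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_rd = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
print("best relay index:", select_best_relay(h_sr, h_rd))
```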
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
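The Monte Carlo variant described above is straightforward to sketch: draw teleportation coefficients from a user-behaviour distribution, solve PageRank for each draw, and report the sample mean and standard deviation. The tiny graph and the Beta distribution below are hypothetical stand-ins, not the data or model used in the paper.

```python
import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank with teleportation coefficient alpha.
    P is a column-stochastic transition matrix of the web graph."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    v = np.full(n, 1.0 / n)                   # uniform teleportation vector
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1.0 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

def monte_carlo_pagerank(P, alpha_sampler, n_samples=500):
    """Sample-based estimate of E[x(A)] and Std[x(A)] when the teleportation
    coefficient A is random (a sketch of the Monte Carlo approach)."""
    samples = np.array([pagerank(P, alpha_sampler()) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# Hypothetical user-behaviour model: A ~ Beta(5, 3), on a tiny example graph.
rng = np.random.default_rng(0)
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 0.5, 0.0]])               # column-stochastic toy graph
mean_pr, std_pr = monte_carlo_pagerank(P, lambda: rng.beta(5.0, 3.0))
print("mean:", mean_pr, "std:", std_pr)
```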
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, being YAGO knowledge base a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark the several alternatives and compare to classical RDF Schema reasoning, providing the first implementation of annotated RDF schema in persistent storage.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1 + √5)√q unless δ - 1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2112
0.2112
0.2112
0.2112
0.1056
0
0
0
0
0
0
0
0
0
Adaptive Multiview Video Delivery Using Hybrid Networking. Multiview entertainment is the next step in 3D immersive media networking owing to its improved depth perception and free-viewpoint viewing capability whereby users can observe the scene from the desired viewpoint. This paper outlines a delivery system for multiview plus depth video, combining the broadcast and broadband networks. The digital video broadcast (DVB) network is used along with adapti...
An example of real time QoE IPTV service estimator This paper will consider an estimator which includes mathematical modelling of physical channel parameters as information carrier and the weakest links in the telecommunication chain of information transfer. It will also identify necessary physical layer parameters which influence the quality of multimedia service delivery or QoE (Quality of Experience). With the modelling of the above mentioned parameters, the relation between degradations will be defined which appear in the channel between the user and the central telecommunication equipment with domination of one media used for information transfer with certain error probability. Degradations in a physical channel can be noticed by observing the change in values of channel transfer function or the appearance of increased noise. Estimation of QoE IPTV (Internet Protocol Television) service is especially necessary during delivery of real time service. In that case the mentioned degradations may appear in any moment and cause a packet loss.
The Impact Of Interactivity On The Qoe: A Preliminary Analysis The interactivity in multimedia services concerns the input/output process of the user with the system, as well as its cooperativity. It is an important element that affects the overall Quality of Experience (QoE), which may even mask the impact of the quality level of the (audio and visual) signal itself on the overall user perception. This work is a preliminary study aimed at evaluating the weight of the interactivity, which relies on subjective assessments that have been conducted varying the artefacts, genre and interactivity features on video streaming services evaluated by the subjects. Subjective evaluations have been collected from 25 subjects in compliance with ITU-T Recommendation P. 910 through single-stimulus Absolute Category Rating (ACR). It resulted that the impact of the interactivity is influenced by the presence of other components, such as presence of buffer starvations and type of content displayed. An objective quality metric able to measure the influence of the interactivity on the QoE has also been defined, which has proved to be highly correlated with subjective results. We concluded that the interactivity feature can be successfully represented by either an additive or a multiplicative component to be added in existing quality metrics.
QoE Evaluation of Multimedia Services Based on Audiovisual Quality and User Interest. Quality of experience (QoE) has significant influence on whether or not a user will choose a service or product in the competitive era. For multimedia services, there are various factors in a communication ecosystem working together on users, which stimulate their different senses inducing multidimensional perceptions of the services, and inevitably increase the difficulty in measurement and estim...
Is QoE estimation based on QoS parameters sufficient for video quality assessment? Internet service providers today offer a variety of audio, video and data services. Traditional approaches for quality assessment of video services were based on Quality of Service (QoS) measurement. These measurements are considered as performance measurements at the network level. However, in order to make an accurate quality assessment, the video must be assessed subjectively by the user. Unfortunately, QoS parameters are easier to obtain than subjective QoE scores. Therefore, some recent works have investigated objective approaches to estimate QoE scores based on measured QoS parameters. The main purpose is the control of QoE based on QoS measurements. This paper presents several solutions and models from the literature. We discuss some other factors that must be considered in the mapping process between QoS and QoE. The impact of these factors on perceived QoE is verified through subjective tests.
Can Context Monitoring Improve Qoe? A Case Study Of Video Flash Crowds In The Internet Of Services Over the last decade or so, significant research has focused on defining Quality of Experience (QoE) of Multimedia Systems and identifying the key factors that collectively determine it. Some consensus thus exists as to the role of System Factors, Human Factors and Context Factors. In this paper, the notion of context is broadened to include information gleaned from simultaneous out-of-band channels, such as social network trend analytics, that can be used if interpreted in a timely manner, to help further optimise QoE. A case study involving simulation of HTTP adaptive streaming (HAS) and load balancing in a content distribution network (CDN) in a flash crowd scenario is presented with encouraging results.
Quality of experience for HTTP adaptive streaming services The growing consumer demand for mobile video services is one of the key drivers of the evolution of new wireless multimedia solutions requiring exploration of new ways to optimize future wireless networks for video services towards delivering enhanced quality of experience (QoE). One of these key video enhancing solutions is HTTP adaptive streaming (HAS), which has recently been spreading as a form of Internet video delivery and is expected to be deployed more broadly over the next few years. As a relatively new technology in comparison with traditional push-based adaptive streaming techniques, deployment of HAS presents new challenges and opportunities for content developers, service providers, network operators and device manufacturers. One of these important challenges is developing evaluation methodologies and performance metrics to accurately assess user QoE for HAS services, and effectively utilizing these metrics for service provisioning and optimizing network adaptation. In that vein, this article provides an overview of HAS concepts, and reviews the recently standardized QoE metrics and reporting framework in 3GPP. Furthermore, we present an end-to-end QoE evaluation study on HAS conducted over 3GPP LTE networks and conclude with a discussion of future challenges and opportunities in QoE optimization for HAS services.
QoE-Based Traffic and Resource Management for Adaptive HTTP Video Delivery in LTE There is a growing interest in over-the-top (OTT) dynamic adaptive streaming over Hypertext Transfer Protocol (HTTP) (DASH) services. In mobile DASH, a client controls the streaming rate and the base station in the mobile network decides on the resource allocation. Different from the majority of previous works that focus on client-based rate adaptation mechanisms, this paper investigates the mobile network potential for enhancing the user quality-of-experience (QoE) in multiuser OTT DASH. Specifically, we first present proactive and reactive QoE optimization approaches for adapting the adaptive HTTP video delivery in a long-term evolution (LTE) network. We then show, using subjective experiments, that by taking a proactive role in determining the transmission and streaming rates, the network operator can provide a better video quality and a fairer QoE across the streaming users. Furthermore, we consider the playout buffer time of the clients and propose a novel playout buffer-dependent approach that determines for each client the streaming rate for future video segments according to its buffer time and the achievable QoE under current radio conditions. In addition, we show that by jointly solving for the streaming and transmission rates, the wireless network resources are more efficiently allocated among the users and substantial gains in the user perceived video quality can be achieved.
A quest for an Internet video quality-of-experience metric An imminent challenge that content providers, CDNs, third-party analytics and optimization services, and video player designers in the Internet video ecosystem face is the lack of a single "gold standard" to evaluate different competing solutions. Existing techniques that describe the quality of the encoded signal or controlled studies to measure opinion scores do not translate directly into user experience at scale. Recent work shows that measurable performance metrics such as buffering, startup time, bitrate, and number of bitrate switches impact user experience. However, converting these observations into a quantitative quality-of-experience metric turns out to be challenging since these metrics are interrelated in complex and sometimes counter-intuitive ways, and their relationship to user experience can be unpredictable. To further complicate things, many confounding factors are introduced by the nature of the content itself (e.g., user interest, genre). We believe that the issue of interdependency can be addressed by casting this as a machine learning problem to build a suitable predictive model from empirical observations. We also show that setting up the problem based on domain-specific and measurement-driven insights can minimize the impact of the various confounding factors to improve the prediction performance.
MULTILEVEL QUADRATURE FOR ELLIPTIC PARAMETRIC PARTIAL DIFFERENTIAL EQUATIONS IN CASE OF POLYGONAL APPROXIMATIONS OF CURVED DOMAINS Multilevel quadrature methods for parametric operator equations such as the multilevel (quasi-) Monte Carlo method resemble a sparse tensor product approximation between the spatial variable and the parameter. We employ this fact to reverse the multilevel quadrature method by applying differences of quadrature rules to finite element discretizations of increasing resolution. Besides being algorithmically more efficient if the underlying quadrature rules are nested, this way of performing the sparse tensor product approximation enables the easy use of nonnested and even adaptively refined finite element meshes. We moreover provide a rigorous error and regularity analysis addressing the variational crimes of using polygonal approximations of curved domains and numerical quadrature of the bilinear form. Our results facilitate the construction of efficient multilevel quadrature methods based on deterministic high order quadrature rules for the stochastic parameter. Numerical results in three spatial dimensions are provided to illustrate the approach.
Model-Based Compressive Sensing Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ≪ N elements from an N-dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models-wavelet trees and block sparsity-into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.
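For context, here is a minimal sketch of the conventional-CS pipeline that model-based CS extends: take M = O(K log(N/K)) random Gaussian measurements of a K-sparse signal and recover it with a greedy algorithm (orthogonal matching pursuit below). The structured (wavelet-tree/block) signal models and the restricted amplification property from the paper are not represented in this sketch.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: a simple greedy recovery used in
    conventional CS (the baseline that model-based CS builds on)."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, K = 256, 8
M = int(4 * K * np.log(N / K))                  # M = O(K log(N/K)) measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x
print("recovery error:", np.linalg.norm(omp(Phi, y, K) - x))
```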
Bayesian inference with optimal maps We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of an optimization problem, exploiting gradient information from the forward model when possible. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations.
Sparse Algorithms are not Stable: A No-free-lunch Theorem. We consider two desired properties of learning algorithms: *sparsity* and *algorithmic stability*. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that $\ell_1$-regularized regression (Lasso) cannot be stable, while $\ell_2$-regularized regression is known to have strong stability properties and is therefore not sparse.
Type 2 fuzzy neural networks: an interpretation based on fuzzy inference neural networks with fuzzy parameters It is shown, in this paper, that the NEFCON, NEFCLASS, and NEFPROX systems can be viewed as equivalent to the RBF-like neuro-fuzzy systems. In addition, they can be considered as type 2 networks. Analogously to these systems, a concept of type 2 fuzzy neural networks is proposed
1.10204
0.10408
0.10408
0.05204
0.035027
0.013835
0.002387
0.00044
0.00005
0
0
0
0
0
General formulation of formal grammars By extracting the basic properties common to the formal grammars appeared in existing literatures, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
The Vienna Definition Language
Artificial Paranoia
Equational Languages
Fuzzy Algorithms
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Dynamic system modeling using a recurrent interval-valued fuzzy neural network and its hardware implementation This paper first proposes a new recurrent interval-valued fuzzy neural network (RIFNN) for dynamic system modeling. A new hardware implementation technique for the RIFNN using a field-programmable gate array (FPGA) chip is then proposed. The antecedent and consequent parts in an RIFNN use interval-valued fuzzy sets in order to increase the network noise resistance ability. A new recurrent structure is proposed in RIFNN, with the recurrent loops enabling it to handle dynamic system processing problems. An RIFNN is constructed from structure and parameter learning. For hardware implementation of the RIFNN, the pipeline technique and a new circuit for type-reduction operation are proposed to improve the chip performance. Simulations and comparisons with various feedforward and recurrent fuzzy neural networks verify the performance of the RIFNN under noisy conditions.
Development of a type-2 fuzzy proportional controller Studies have shown that PID controllers can be realized by type-1 (conventional) fuzzy logic systems (FLSs). However, the input-output mappings of such fuzzy PID controllers are fixed. The control performance would, therefore, vary if the system parameters are uncertain. This paper aims at developing a type-2 FLS to control a process whose parameters are uncertain. A method for designing type-2 triangular membership functions with the desired generalized centroid is first proposed. By using this type-2 fuzzy set to partition the output domain, a type-2 fuzzy proportional controller is obtained. It is shown that the type-2 fuzzy logic system is equivalent to a proportional controller that may assume a range of gains. Simulation results are presented to demonstrate that the performance of the proposed controller can be maintained even when the system parameters deviate from their nominal values.
A hybrid multi-criteria decision-making model for firms competence evaluation In this paper, we present a hybrid multi-criteria decision-making (MCDM) model to evaluate the competence of firms. The competence-based theory holds that firm competencies arise from the exclusive and unique capabilities that each firm enjoys in the marketplace and that they are tightly intertwined with different business functions throughout the company. Competence in the firm is therefore a composite of various attributes, many of them intangible or tangible attributes that are difficult to measure. To overcome this issue, we bring fuzzy set theory into the measurement of performance. We first calculate the weight of each criterion through the adaptive analytic hierarchy process (AHP) approach (A^3) and then appraise the performance of firms via linguistic variables expressed as trapezoidal fuzzy numbers. In the next step we transform these fuzzy numbers into interval data by means of α-cuts. Then, considering different values of α, we rank the firms through the TOPSIS method with interval data. Since different α values yield different rankings, we apply the linear assignment method to obtain a final ranking of the alternatives.
Fuzzy decision making with immediate probabilities We developed a new decision-making model with probabilistic information and used the concept of the immediate probability to aggregate the information. This type of probability modifies the objective probability by introducing the attitudinal character of the decision maker. In doing so, we use the ordered weighted averaging (OWA) operator. When using this model, it is assumed that the information is given by exact numbers. However, this may not be the real situation found within the decision-making problem. Sometimes, the information is vague or imprecise and it is necessary to use another approach to assess the information, such as the use of fuzzy numbers. Then, the decision-making problem can be represented more completely because we now consider the best and worst possible scenarios, along with the possibility that some intermediate event (an internal value) will occur. We will use the fuzzy ordered weighted averaging (FOWA) operator to aggregate the information with the probabilities. As a result, we will get the Immediate Probability-FOWA (IP-FOWA) operator. We will study some of its main properties. We will apply the new approach in a decision-making problem about selection of strategies.
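A crisp sketch of the immediate-probability idea (the paper's IP-FOWA works with fuzzy-number payoffs; here the payoffs are ordinary numbers for brevity): the OWA weights re-weight the objective probabilities, and the resulting "immediate probabilities" are used to average the payoffs ordered from best to worst.

```python
import numpy as np

def ip_owa(values, probabilities, owa_weights):
    """Immediate-probability OWA aggregation (crisp illustrative sketch).

    The objective probabilities are re-weighted by the OWA weights (which
    encode the decision maker's attitude), giving immediate probabilities
    v_j = w_j * p_j / sum_k(w_k * p_k); these then average the payoffs
    sorted from largest to smallest.
    """
    order = np.argsort(values)[::-1]              # payoffs from best to worst
    b = np.asarray(values, dtype=float)[order]
    p = np.asarray(probabilities, dtype=float)[order]
    w = np.asarray(owa_weights, dtype=float)
    v = w * p / np.sum(w * p)                     # immediate probabilities
    return float(np.sum(v * b))

# Example: three states of nature, an optimistic attitude (weight on best outcomes).
print(ip_owa(values=[70, 40, 20],
             probabilities=[0.3, 0.5, 0.2],
             owa_weights=[0.6, 0.3, 0.1]))
```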
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Sparse Reconstruction by Separable Approximation Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
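A minimal iterative-shrinkage sketch of the separable-subproblem idea for the standard ℓ2-ℓ1 case: each step solves a quadratic model with a constant diagonal curvature plus the ℓ1 term, whose closed-form solution is soft-thresholding. SpaRSA itself chooses the curvature adaptively (e.g. Barzilai-Borwein steps) and handles other regularizers; those refinements are omitted here.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form solution of the separable l1 subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l2_l1(A, y, lam, n_iter=500):
    """Minimal iterative-shrinkage sketch for
        min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1.

    A constant curvature alpha (>= Lipschitz constant of the gradient) is
    used, so each iteration is a gradient step followed by soft-thresholding.
    """
    alpha = np.linalg.norm(A, 2) ** 2          # safe constant step curvature
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x

# Tiny usage example on random data (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
x_est = ista_l2_l1(A, A @ x_true, lam=0.1)
print("support found:", np.flatnonzero(np.abs(x_est) > 0.5))
```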
A fuzzy logic system for the detection and recognition of handwritten street numbers Fuzzy logic is applied to the problem of locating and reading street numbers in digital images of handwritten mail. A fuzzy rule-based system is defined that uses uncertain information provided by image processing and neural network-based character recognition modules to generate multiple hypotheses with associated confidence values for the location of the street number in an image of a handwritten address. The results of a blind test of the resultant system are presented to demonstrate the value of this new approach. The results are compared to those obtained using a neural network trained with backpropagation. The fuzzy logic system achieved higher performance rates
A possibilistic approach to the modeling and resolution of uncertain closed-loop logistics Closed-loop logistics planning is an important tactic for the achievement of sustainable development. However, the correlation among the demand, recovery, and landfilling makes the estimation of their rates uncertain and difficult. Although the fuzzy numbers can present such kinds of overlapping phenomena, the conventional method of defuzzification using level-cut methods could result in the loss of information. To retain complete information, the possibilistic approach is adopted to obtain the possibilistic mean and mean square imprecision index (MSII) of the shortage and surplus for uncertain factors. By applying the possibilistic approach, a multi-objective, closed-loop logistics model considering shortage and surplus is formulated. The two objectives are to reduce both the total cost and the root MSII. Then, a non-dominated solution can be obtained to support decisions with lower perturbation and cost. Also, the information on prediction interval can be obtained from the possibilistic mean and root MSII to support the decisions in the uncertain environment. This problem is non-deterministic polynomial-time hard, so a new algorithm based on the spanning tree-based genetic algorithm has been developed. Numerical experiments have shown that the proposed algorithm can yield comparatively efficient and accurate results.
1.200022
0.200022
0.200022
0.200022
0.066689
0.006263
0.000033
0.000026
0.000023
0.000019
0.000014
0
0
0
A type-2 fuzzy logic controller design for buck and boost DC–DC converters Conventional (type-1) fuzzy logic controllers have been commonly used in various power converter applications. Generally, in these controllers, the experience and knowledge of human experts are needed to decide parameters associated with the rule base and membership functions. The rule base and the membership function parameters may often mean different things to different experts. This may cause rule uncertainty problems. Consequently, the performance of the controlled system, which is controlled with a type-1 fuzzy logic controller, is undesirably affected. In this study, a type-2 fuzzy logic controller is proposed for the control of buck and boost DC–DC converters. To examine and analyze the effects of the proposed controller on the system performance, both converters are also controlled using the PI controller and a conventional fuzzy logic controller. The settling time, the overshoot, the steady state error and the transient response of the converters under load and input voltage changes are used as the performance criteria for the evaluation of the controller performance. Simulation results show that buck and boost converters controlled by the type-2 fuzzy logic controller have better performance than the buck and boost converters controlled by the type-1 fuzzy logic controller and the PI controller.
Adaptive Backstepping Fuzzy Control Based on Type-2 Fuzzy System. A novel indirect adaptive backstepping control approach based on type-2 fuzzy system is developed for a class of nonlinear systems. This approach adopts type-2 fuzzy system instead of type-1 fuzzy system to approximate the unknown functions. With type-reduction, the type-2 fuzzy system is replaced by the average of two type-1 fuzzy systems. Ultimately, the adaptive laws, by means of backstepping design technique, will be developed to adjust the parameters to attenuate the approximation error and external disturbance. According to stability theorem, it is proved that the proposed Type-2 Adaptive Backstepping Fuzzy Control (T2ABFC) approach can guarantee global stability of closed-loop system and ensure all the signals bounded. Compared with existing Type-1 Adaptive Backstepping Fuzzy Control (T1ABFC), as the advantages of handling numerical and linguistic uncertainties, T2ABFC has the potential to produce better performances in many respects, such as stability and resistance to disturbances. Finally, a biological simulation example is provided to illustrate the feasibility of control scheme proposed in this paper.
Type-2 fuzzy control for a flexible-joint robot using voltage control strategy Type-1 fuzzy sets cannot fully handle the uncertainties. To overcome the problem, type-2 fuzzy sets have been proposed. The novelty of this paper is using interval type-2 fuzzy logic controller (IT2FLC) to control a flexible-joint robot with voltage control strategy. In order to take into account the whole robotic system including the dynamics of actuators and the robot manipulator, the voltages of motors are used as inputs of the system. To highlight the capabilities of the control system, a flexible joint robot which is highly nonlinear, heavily coupled and uncertain is used. In addition, to improve the control performance, the parameters of the primary membership functions of IT2FLC are optimized using particle swarm optimization (PSO). A comparative study between the proposed IT2FLC and type-1 fuzzy logic controller (T1FLC) is presented to better assess their respective performance in presence of external disturbance and unmodelled dynamics. Stability analysis is presented and the effectiveness of the proposed control approach is demonstrated by simulations using a two-link flexible-joint robot driven by permanent magnet direct current motors. Simulation results show the superiority of the IT2FLC over the T1FLC in terms of accuracy, robustness and interpretability.
A type-2 fuzzy wavelet neural network for system identification and control This paper proposes a novel, type-2 fuzzy wavelet neural network (type-2 FWNN) structure that combines the advantages of type-2 fuzzy systems and wavelet neural networks for identification and control of nonlinear uncertain systems. The proposed network is constructed on the base of a set of fuzzy rules that includes type-2 fuzzy sets in the antecedent part and wavelet functions in the consequent part. For structure identification, a fuzzy clustering algorithm is implemented to generate the rules automatically and for parameter identification the gradient learning algorithm is used. The effectiveness of the proposed system is evaluated for identification and control problems of time-invariant and time-varying systems. The results obtained are compared with those obtained by the use of type-1 FWNN based systems and other similar studies.
Control of the biodegradation of mixed wastes in a continuous bioreactor by a type-2 fuzzy logic controller Type-2 fuzzy logic control is proposed for nonlinear processes characterized by bifurcations. A control simulation study was conducted for a bioreactor with cell recycle containing phenol and glucose as carbon and energy sources in which a pure culture of Pseudomonas putida is carried out. The model developed by Ajbar [Ajbar, A. (2001). Stability analysis of the biodegradation of mixed wastes in a continuous bioreactor with cell recycle. Water Research, 35(5), 1201–1208] was used for the simulations.
Adaptive noise cancellation using type-2 fuzzy logic and neural networks. We describe in this paper the use of type-2 fuzzy logic for achieving adaptive noise cancellation. The objective of adaptive noise cancellation is to filter out an interference component by identifying a model between a measurable noise source and the corresponding un-measurable interference. We propose the use of type-2 fuzzy logic to find this model. The use of type-2 fuzzy logic is justified due to the high level of uncertainty of the process, which makes difficult to find appropriate parameter values for the membership functions.
Discrete Interval Type 2 Fuzzy System Models Using Uncertainty in Learning Parameters Fuzzy system modeling (FSM) is one of the most prominent tools that can be used to identify the behavior of highly nonlinear systems with uncertainty. Conventional FSM techniques utilize type 1 fuzzy sets in order to capture the uncertainty in the system. However, since type 1 fuzzy sets express the belongingness of a crisp value x' of a base variable x in a fuzzy set A by a crisp membership value μA(x'), they cannot fully capture the uncertainties due to imprecision in identifying membership functions. Higher types of fuzzy sets can be a remedy to address this issue. Since the computational complexity of operations on fuzzy sets increases with the type of the fuzzy set, the use of type 2 fuzzy sets and linguistic logical connectives drew a considerable amount of attention in the realm of fuzzy system modeling in the last two decades. In this paper, we propose a black-box methodology that can identify robust type 2 Takagi-Sugeno, Mizumoto and Linguistic fuzzy system models with high predictive power. One of the essential problems of type 2 fuzzy system models is computational complexity. In order to remedy this problem, discrete interval valued type 2 fuzzy system models are proposed with type reduction. In the proposed fuzzy system modeling methods, the fuzzy C-means (FCM) clustering algorithm is used in order to identify the system structure. The proposed discrete interval valued type 2 fuzzy system models are generated by a learning parameter of FCM, known as the level of membership, and its variation over a specific set of values, which generates the uncertainty associated with the system structure.
A new hybrid artificial neural networks and fuzzy regression model for time series forecasting Quantitative methods have nowadays become very important tools for forecasting purposes in financial markets as for improved decisions and investments. Forecasting accuracy is one of the most important factors involved in selecting a forecasting method; hence, never has research directed at improving upon the effectiveness of time series models stopped. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, ANNs need a large amount of historical data in order to yield accurate results. In a real world situation and in financial markets specifically, the environment is full of uncertainties and changes occur rapidly; thus, future situations must be usually forecasted using the scant data made available over a short span of time. Therefore, forecasting in these situations requires methods that work efficiently with incomplete data. Although fuzzy forecasting methods are suitable for incomplete data situations, their performance is not always satisfactory. In this paper, based on the basic concepts of ANNs and fuzzy regression models, a new hybrid method is proposed that yields more accurate results with incomplete data sets. In our proposed model, the advantages of ANNs and fuzzy regression are combined to overcome the limitations in both ANNs and fuzzy regression models. The empirical results of financial market forecasting indicate that the proposed model can be an effective way of improving forecasting accuracy.
Type-Reduction Of General Type-2 Fuzzy Sets: The Type-1 Owa Approach For general type-2 fuzzy sets, the defuzzification process is very complex and the exhaustive direct method of implementing type-reduction is computationally expensive and turns out to be impractical. This has inevitably hindered the development of type-2 fuzzy inferencing systems in real-world applications. The present situation will not be expected to change, unless an efficient and fast method of deffuzzifying general type-2 fuzzy sets emerges. Type-1 ordered weighted averaging (OWA) operators have been proposed to aggregate expert uncertain knowledge expressed by type-1 fuzzy sets in decision making. In particular, the recently developed alpha-level approach to type-1 OWA operations has proven to be an effective tool for aggregating uncertain information with uncertain weights in real-time applications because its complexity is of linear order. In this paper, we prove that the mathematical representation of the type-reduced set (TRS) of a general type-2 fuzzy set is equivalent to that of a special case of type-1 OWA operator. This relationship opens up a new way of performing type reduction of general type-2 fuzzy sets, allowing the use of the alpha-level approach to type-1 OWA operations to compute the TRS of a general type-2 fuzzy set. As a result, a fast and efficient method of computing the centroid of general type-2 fuzzy sets is realized. The experimental results presented here illustrate the effectiveness of this method in conducting type reduction of different general type-2 fuzzy sets.
Logical structure of fuzzy IF-THEN rules This paper provides a logical basis for manipulation with fuzzy IF-THEN rules. Our theory is wide enough and it encompasses not only finding a conclusion by means of the compositional rule of inference due to Lotfi A. Zadeh but also other kinds of approximate reasoning methods, e.g., perception-based deduction, provided that there exists a possibility to characterize them within a formal logical system. In contrast with other approaches employing variants of multiple-valued first-order logic, the approach presented here employs fuzzy type theory of V. Novák which has sufficient expressive power to present the essential concepts and results in a compact, elegant and justifiable form. Within the effectively formalized representation developed here, based on a complete logical system, it is possible to reconstruct numerous well-known properties of CRI-related fuzzy inference methods, albeit not from the analytic point of view as usually presented, but as formal derivations of the logical system employed. The authors are confident that eventually all relevant knowledge about fuzzy inference methods based on fuzzy IF-THEN rule bases will be represented, formalized and backed up by proof within the well-founded logical representation presented here. An immediate positive consequence of this approach is that suddenly all elements of a fuzzy inference method based on fuzzy IF-THEN rules are ‘first class citizens´ of the representation: there are clear, logically founded definitions for fuzzy IF-THEN rule bases to be consistent, complete, or independent.
Noun Phrase Coreference as Clustering This paper introduces a new, unsupervised algorithm for noun phrase coreference resolution. It differs from existing methods in that it views coreference resolution as a clustering task. In an evaluation on the MUC-6 coreference resolution corpus, the algorithm achieves an F-measure of 53.6%, placing it firmly between the worst (40%) and best (65%) systems in the MUC-6 evaluation. More importantly, the clustering approach outperforms the only MUC-6 system to treat coreference resolution as a learning problem. The clustering algorithm appears to provide a flexible mechanism for coordinating the application of context-independent and context-dependent constraints and preferences for accurate partitioning of noun phrases into coreference equivalence classes.
Reduction about approximation spaces of covering generalized rough sets The introduction of covering generalized rough sets has made a substantial contribution to the traditional theory of rough sets. The notion of attribute reduction can be regarded as one of the strongest and most significant results in rough sets. However, the efforts made on attribute reduction of covering generalized rough sets are far from sufficient. In this work, covering reduction is examined and discussed. We initially construct a new reduction theory by redefining the approximation spaces and the reducts of covering generalized rough sets. This theory is applicable to all types of covering generalized rough sets, and generalizes some existing reduction theories. Moreover, the currently insufficient reducts of covering generalized rough sets are improved by the new reduction. We then investigate in detail the procedures to get reducts of a covering. The reduction of a covering also provides a technique for data reduction in data mining.
On Ryser's conjecture. Motivated by an old problem known as Ryser's Conjecture, we prove that for r = 4 and r = 5, there exists ε > 0 such that every r-partite r-uniform hypergraph H has a cover of size at most (r - ε)ν(H), where ν(H) denotes the size of a largest matching in H.
Fuzzy management of user actions during hypermedia navigation The recent dramatic advances in the field of multimedia systems have made practicable the development of an Intelligent Tutoring Multimedia (ITM). These systems contain hypertextual structures that belong to the class of hypermedia systems. ITM development involves the definition of a suitable navigation model in addition to the other modules of an Intelligent Tutoring System (ITS), i.e. the Database module, User module, Interface module, and Teaching module. The navigation module receives as inputs the state of the system and the user's current assessment and tries to optimize the fruition of the knowledge base. Moreover, this module is responsible for managing the effects of disorientation and cognitive overhead. In this paper we deal essentially with four topics: (i) to define a fuzzy-based user model able to manage adequately the user's cognitive state, the orientation, and the cognitive overhead; (ii) to introduce fuzzy tools within the navigation module in order to carry out moves on the grounds of meaningful data; (iii) to define a set of functions that can dynamically infer new states concerning the user's interests; and (iv) to classify the hypermedia actions according to their semantics.
1.068802
0.068333
0.068333
0.068333
0.033333
0.011144
0.005331
0.000667
0.000062
0
0
0
0
0
The LTOPSIS: An alternative to TOPSIS decision-making approach for linguistic variables This paper develops an evaluation approach based on the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS). When the input for a decision process is linguistic, it can be understood that the output should also be linguistic. For that reason, in this paper we propose a modification of the TOPSIS algorithm which develops the above idea and which can also be used as a linguistic classifier. In this new development, modifications to the classic algorithm have been considered which enable linguistic outputs and which can be checked through the inclusion of an applied example to demonstrate the goodness of the new model proposed.
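For reference, here is a sketch of the classical crisp TOPSIS procedure that the linguistic variant above modifies: normalize the decision matrix, weight it, measure distances to the ideal and anti-ideal solutions, and rank alternatives by the closeness coefficient. The numbers in the example are made up.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Classical TOPSIS ranking (the crisp algorithm that LTOPSIS builds on).

    decision_matrix: alternatives x criteria scores.
    weights: criterion weights summing to 1.
    benefit_mask: True where larger is better, False for cost criteria.
    Returns closeness coefficients; larger means closer to the ideal solution.
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = X / np.linalg.norm(X, axis=0)              # vector-normalized matrix
    V = R * w                                      # weighted normalized matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)     # distance to ideal
    d_minus = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal
    return d_minus / (d_plus + d_minus)

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2],
                benefit_mask=[True, True, False])
print(scores, "best alternative:", int(np.argmax(scores)))
```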
A Linguistic Approach to Structural Analysis in Prospective Studies.
A Linguistic Screening Evaluation Model in New Product Development. The screening of new product ideas is critically very important in new product development (NPD). Due to the incompleteness of information available and the qualitative nature of most evaluation criteria regarding NPD process, a fuzzy linguistic approach may be necessary for new-product screening, making use of linguistic assessments and the fuzzy-set-based computation. However, an inherent limita...
An evaluation of airline service quality using the fuzzy weighted SERVQUAL method The airline service quality is an important issue in the international air travel transportation industry. Although a number of studies have focused on airline service quality evaluation in the past, most of them applied the SERVQUAL method to evaluate airline service quality, and only a few have attempted to evaluate it using the weighted SERVQUAL method. Furthermore, human judgments are often vague and it is not easy for passengers to express the weights of evaluation criteria and the satisfaction of airline service quality using an exact numerical value. It is more realistic to use linguistic terms to describe the expectation value, perception value and important weight of evaluation criteria. Due to this type of existing fuzziness in the airline service quality evaluation, fuzzy set theory is an appropriate method for dealing with uncertainty. The subjective evaluation data can be more adequately expressed in linguistic variables. Thus this article attempts to fill this gap in the current literature by establishing a fuzzy weighted SERVQUAL model for evaluating the airline service quality. A case study of a Taiwanese airline is conducted to demonstrate the effectiveness of the fuzzy weighted SERVQUAL model. Finally, some interesting conclusions and useful suggestions are given to airlines to improve the service quality.
The concept of a linguistic variable and its application to approximate reasoning-III By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, ... In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0, 1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value, e.g., young and old in not very young and not very old, to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or are treated as nonlinear operators which modify the meaning of their operands in a specified fashion.
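A small sketch of these notions: a compatibility (membership) function for a primary term such as young on the universe U = [0, 100], with the hedge very and the connectives and/or realized as the usual nonlinear operators (squaring, min, max). The breakpoints of the membership functions and the choice of operators are the standard textbook ones, given here for illustration rather than taken from the paper.

```python
import numpy as np

# Illustrative compatibility functions for the primary terms "young" and "old"
# on the universe U = [0, 100]; the breakpoints are assumptions.
def young(u):
    return float(np.clip((40.0 - u) / 15.0, 0.0, 1.0))   # 1 below 25, 0 above 40

def old(u):
    return float(np.clip((u - 50.0) / 20.0, 0.0, 1.0))   # 0 below 50, 1 above 70

# Hedges and connectives as nonlinear operators on compatibilities.
def very(m):
    return m ** 2          # "very" as squaring (concentration)

def not_(m):
    return 1.0 - m         # "not" as complement

def and_(m1, m2):
    return min(m1, m2)     # "and" as min

u = 27.0
print("young(27) =", young(u))
print("very young(27) =", very(young(u)))
# Compatibility of age 27 with the composite value "not very young and not very old".
print(and_(not_(very(young(u))), not_(very(old(u)))))
```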
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. Besides, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types.
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
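A hedged sketch of the same idea with a modern library standing in for the paper's original training procedure (scikit-learn, the synthetic data, and the large-C approximation of a hard margin are all choices made here for illustration):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    clf = SVC(kernel="linear", C=1e3)   # a large C approximates a hard margin
    clf.fit(X, y)

    # The decision boundary is a linear combination of the supporting patterns,
    # i.e., the training points closest to the boundary.
    print("number of support vectors:", len(clf.support_vectors_))
    print("weights:", clf.coef_, "bias:", clf.intercept_)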
A Bayesian approach to image expansion for improved definition. Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
A simple Cooperative diversity method based on network path selection Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this "best" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.
Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviations of the PageRanks are uncorrelated with the PageRank vector.
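A minimal Monte Carlo sketch of that idea (the four-node graph, the Beta model of user behaviour, and the sample size are assumptions for illustration only): sample the teleportation coefficient, solve PageRank for each sample, and report the mean and standard deviation of the resulting random PageRank vector.

    import numpy as np

    def pagerank(P, alpha, tol=1e-12, max_iter=1000):
        # Power iteration for PageRank with a column-stochastic matrix P.
        n = P.shape[0]
        x = np.full(n, 1.0 / n)
        v = np.full(n, 1.0 / n)              # uniform teleportation vector
        for _ in range(max_iter):
            x_new = alpha * P @ x + (1 - alpha) * v
            if np.abs(x_new - x).sum() < tol:
                return x_new
            x = x_new
        return x

    # Tiny assumed web graph; each column sums to 1.
    P = np.array([[0, 0, 1/2, 0],
                  [1/3, 0, 0, 1/2],
                  [1/3, 1/2, 0, 1/2],
                  [1/3, 1/2, 1/2, 0]])

    rng = np.random.default_rng(1)
    alphas = rng.beta(8, 2, size=2000)       # assumed distribution of damping over users
    samples = np.array([pagerank(P, a) for a in alphas])
    print("mean PageRank:", samples.mean(axis=0))
    print("std  PageRank:", samples.std(axis=0))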
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus is now on the quality perceived by the user, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges in performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in handling such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.2, 0.2, 0.1, 0.05, 0.000075, 0, 0, 0, 0, 0, 0, 0, 0, 0
Analysis Of Packet Loss For Compressed Video: Does Burst-Length Matter? Video communication is often afflicted by various forms of losses, such as packet loss over the Internet. This paper examines the question of whether the packet loss pattern, and in particular the burst length, is important for accurately estimating the expected mean-squared error distortion. Specifically, we (1) verify that the loss pattern does have a significant effect on the resulting distortion, (2) explain why a loss pattern, for example a burst loss, generally produces a larger distortion than an equal number of isolated losses, and (3) propose a model that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames. The accuracy of the proposed model is validated with JVT/H.26L coded video and previous frame concealment, where for most sequences the total distortion is predicted to within +/-0.3 dB for burst loss of length two packets, as compared to prior models which underestimate the distortion by about 1.5 dB. Furthermore, as the burst length increases, our prediction is within +/-0.7 dB, while prior models degrade and underestimate the distortion by over 3 dB.
Resilient Peer-to-Peer Streaming We consider the problem of distributing "live" streaming media content to a potentially large and highly dynamic population of hosts. Peer-to-peer content distribution is attractive in this setting because the bandwidth available to serve content scales with demand. A key challenge, however, is making content distribution robust to peer transience. Our approach to providing robustness is to introduce redundancy, both in network paths and in data. We use multiple, diverse distribution trees to provide redundancy in network paths and multiple description coding (MDC) to provide redundancy in data. We present a simple tree management algorithm that provides the necessary path diversity and describe an adaptation framework for MDC based on scalable receiver feedback. We evaluate these using MDC applied to real video data coupled with real usage traces from a major news site that experienced a large flash crowd for live streaming content. Our results show very significant benefits in using multiple distribution trees and MDC, with a 22 dB improvement in PSNR in some cases.
Content-Aware P2P Video Streaming With Low Latency This paper describes the Stanford P2P Multicast (SPPM) streaming system that employs an overlay architecture specifically designed for low delay video applications. In order to provide interactivity to the user, this system has to keep the end-to-end delay as small as possible while guaranteeing a high video quality. A set of complementary multicast trees is maintained to efficiently relay video traffic and a Congestion-Distortion Optimized (CoDiO) scheduler prioritizes more important video packets. Local retransmission is employed to mitigate packet loss. Real-time experiments performed on PlanetLab show the effectiveness of the system and the benefits of a content-aware scheduler in case of congestion or node failures.
Visibility Of Individual Packet Losses In MPEG-2 Video The ability of a human to visually detect whether a packet has been lost during the transport of compressed video depends heavily on the location of the packet loss and the content of the video. In this paper, we explore when humans can visually detect the error caused by individual packet losses. Using the results of a subjective test based on 1080 packet losses in 72 minutes of video, we design a classifier that uses objective factors extracted from the video to predict the visibility of each error. Our classifier achieves over 93% accuracy.
The bittorrent p2p file-sharing system: measurements and analysis Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems.
Real-Time System for Adaptive Video Streaming Based on SVC This paper presents the integration of scalable video coding (SVC) into a generic platform for multimedia adaptation. The platform provides a full MPEG-21 chain including server, adaptation nodes, and clients. An efficient adaptation framework using SVC and MPEG-21 digital item adaptation (DIA) is integrated and it is shown that SVC can seamlessly be adapted using DIA. For protection of packet losses in an error prone environment an unequal erasure protection scheme for SVC is provided. The platform includes a real-time SVC encoder capable of encoding CIF video with a QCIF base layer and fine grain scalable quality refinement at 12.5 fps on off-the-shelf high-end PCs. The reported quality degradation due to the optimization of the encoding algorithm is below 0.6 dB for the tested sequences.
High-quality video view interpolation using a layered representation The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
Video quality evaluation in the cloud
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
Explicit cost bounds of algorithms for multivariate tensor product problems We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form $$(c(d) + 2)\,\beta_1\left(\beta_2 + \beta_3\,\frac{\ln(1/\varepsilon)}{d-1}\right)^{\beta_4(d-1)}\left(\frac{1}{\varepsilon}\right)^{\beta_5}.$$ Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the $\beta_i$'s do not...
Maximum degree and fractional matchings in uniform hypergraphs Let ℋ be a family of r-subsets of a finite set X. Set $D(\mathcal{H}) = \max_{x \in X} |\{E : x \in E \in \mathcal{H}\}|$ (maximum degree). We say that ℋ is intersecting if for any H, H′ ∈ ℋ we have H ∩ H′ ≠ ∅. In this case, obviously, D(ℋ) ≧ |ℋ|/r. According to a well-known conjecture, D(ℋ) ≧ |ℋ|/(r−1+1/r). We prove a slightly stronger result. Let ℋ be an r-uniform, intersecting hypergraph. Then either it is a projective plane of order r−1, consequently D(ℋ) = |ℋ|/(r−1+1/r), or D(ℋ) ≧ |ℋ|/(r−1). This is a corollary to a more general theorem on not necessarily intersecting hypergraphs.
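A quick worked check of the extremal case mentioned above (an illustration added here, not part of the abstract): the Fano plane is the projective plane of order 2, i.e., a 3-uniform intersecting hypergraph with 7 edges in which every point lies on exactly 3 edges, so $$D(\mathcal{H}) = 3 = \frac{7}{3-1+1/3} = \frac{|\mathcal{H}|}{r-1+1/r}, \qquad r = 3,\ |\mathcal{H}| = 7,$$ which attains the conjectured bound with equality.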
An efficient algorithm for statistical minimization of total power under timing yield constraints Power minimization under variability is formulated as a rigorous statistical robust optimization program with a guarantee of power and timing yields. Both power and timing metrics are treated probabilistically. Power reduction is performed by simultaneous sizing and dual threshold voltage assignment. An extremely fast run-time is achieved by casting the problem as a second-order conic problem and solving it using efficient interior-point optimization methods. When compared to the deterministic optimization, the new algorithm, on average, reduces static power by 31% and total power by 17% without the loss of parametric yield. The run time on a variety of public and industrial benchmarks is 30× faster than other known statistical power minimization algorithms.
Fuzzy prediction based on regression models We use a linguistic variable to represent imprecise information to be inserted in regression models used for prediction. We show how one can obtain probabilistic statements about the forecasted variables.
Mesh denoising via L0 minimization We present an algorithm for denoising triangulated models based on L0 minimization. Our method maximizes the flat regions of the model and gradually removes noise while preserving sharp features. As part of this process, we build a discrete differential operator for arbitrary triangle meshes that is robust with respect to degenerate triangulations. We compare our method versus other anisotropic denoising algorithms and demonstrate that our method is more robust and produces good results even in the presence of high noise.
Scores: 1.039684, 0.03131, 0.025734, 0.014591, 0.002707, 0.001176, 0.000312, 0.000094, 0.000012, 0, 0, 0, 0, 0
A group decision-making approach to uncertain quality function deployment based on fuzzy preference relation and fuzzy majority. Quality function deployment (QFD) is one of the very effective customer-driven quality system tools typically applied to fulfill customer needs or requirements (CRs). It is a crucial step in QFD to derive the prioritization of design requirements (DRs) from CRs for a product. However, effective prioritization of DRs is seriously challenged due to two types of uncertainties: human subjective perception and customer heterogeneity. This paper tries to propose a novel two-stage group decision-making approach to simultaneously address the two types of uncertainties underlying QFD. The first stage is to determine the fuzzy preference relations of different DRs with respect to each customer based on the order-based semantics of linguistic information. The second stage is to determine the prioritization of DRs by synthesizing all customers' fuzzy preference relations into an overall one by fuzzy majority. Two examples, a Chinese restaurant and a flexible manufacturing system, are used to illustrate the proposed approach. The restaurant example is also used to compare with three existing approaches. Implementation results show that the proposed approach can eliminate the burden of quantifying qualitative concepts and model customer heterogeneity and design team's preference. Due to its easiness, our approach can reduce the cognitive burden of QFD planning team and give a practical convenience in QFD planning. Extensions to the proposed approach are also given to address application contexts involving a wider set of HOQ elements. (C) 2014 Elsevier B.V. All rights reserved.
A Target-Based Decision-Making Approach to Consumer-Oriented Evaluation Model for Japanese Traditional Crafts This paper deals with the evaluation of Japanese traditional crafts, in which product items are assessed according to the so-called “Kansei” features by means of the semantic differential method. For traditional crafts, decisions on which items to buy or use are usually influenced by personal feelings/characteristics; therefore, we shall propose a consumer-oriented evaluation model targeting these specific requests by consumers. Particularly, given a consumer's request, the proposed model aims to define an evaluation function that quantifies how well a product item meets the consumer's feeling preferences. An application to evaluating patterns of Kutani porcelain is conducted to illustrate how the proposed evaluation model works, in practice.
On prioritized weighted aggregation in multi-criteria decision making This paper deals with multi-criteria decision making (MCDM) problems with multiple priorities, in which priority weights associated with the lower priority criteria are related to the satisfactions of the higher priority criteria. Firstly, we propose a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) operator and triangular norms (t-norms). To preserve the tradeoffs among the criteria in the same priority level, we suggest that the degree of satisfaction regarding each priority level is viewed as a pseudo criterion. On the other hand, t-norms are used to model the priority relationships between the criteria in different priority levels. In particular, we show that strict Archimedean t-norms perform better in inducing priority weights. As Hamacher family of t-norms provide a wide class of strict Archimedean t-norms ranging from the product to weakest t-norm, Hamacher parameterized t-norms are used to induce the priority weight for each priority level. Secondly, considering decision maker (DM)'s requirement toward higher priority levels, a benchmark based approach is proposed to induce priority weight for each priority level. In particular, Lukasiewicz implication is used to compute benchmark achievement for crisp requirements; target-oriented decision analysis is utilized to obtain the benchmark achievement for fuzzy requirements. Finally, some numerical examples are used to illustrate the proposed prioritized aggregation technique as well as to compare with previous research.
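A minimal sketch of prioritized scoring in the spirit described above, assuming the product t-norm (one member of the Hamacher family) to induce the priority weights; the criteria, satisfaction degrees and the final normalization are invented for the example:

    def prioritized_score(level_satisfactions):
        # level_satisfactions: satisfaction in [0, 1] of each priority level,
        # ordered from the highest priority level to the lowest.
        score, weight = 0.0, 1.0
        for s in level_satisfactions:
            score += weight * s     # contribution of this level
            weight *= s             # product t-norm: lower levels matter only if higher ones are satisfied
        return score / len(level_satisfactions)   # simple normalization, an assumption of this sketch

    # Safety (highest priority), then cost, then comfort -- invented degrees.
    print(prioritized_score([0.9, 0.6, 0.8]))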
Multi-sample test-based clustering for fuzzy random variables A clustering method to group independent fuzzy random variables observed on a sample by focusing on their expected values is developed. The procedure is iterative and based on the p-value of a multi-sample bootstrap test. Thus, it simultaneously takes into account fuzziness and stochastic variability. Moreover, an objective stopping criterion leading to statistically equal groups different from each other is provided. Some simulations to show the performance of this inferential approach are included. The results are illustrated by means of a case study.
Kansei evaluation based on prioritized multi-attribute fuzzy target-oriented decision analysis This paper deals with Kansei evaluation focusing on consumers' psychological needs and personal taste. To do so, a preparatory study is conducted beforehand to obtain Kansei data of the products to be evaluated, in which products are assessed according to Kansei attributes by means of the semantic differential method and linguistic variables. These Kansei data are then used to generate Kansei profiles for evaluated products by means of the voting statistics. As consumers' preferences on Kansei attributes of products vary from person to person and target-oriented decision analysis provides a good description of individual preference, the target-oriented decision analysis has been used and extended to quantify how well a product meets consumers' preferences. Due to the vagueness and uncertainty of consumers' preferences, three types of fuzzy targets are defined to represent the consumers' preferences. Considering the priority order of Kansei attributes specified by consumers, a so-called prioritized scoring aggregation operator is utilized to aggregate the partial degrees of satisfaction for the evaluated products. As the aesthetic aspect plays a crucial role in human choice of traditional crafts, an application to evaluate Kanazawa gold leaf, a traditional craft in Ishikawa, Japan, has also been provided to illustrate how the proposed model works in practice.
Bootstrap techniques and fuzzy random variables: Synergy in hypothesis testing with fuzzy data In previous studies we have stated that the well-known bootstrap techniques are a valuable tool in testing statistical hypotheses about the means of fuzzy random variables, when these variables are supposed to take on a finite number of different values and these values being fuzzy subsets of the one-dimensional Euclidean space. In this paper we show that the one-sample method of testing about the mean of a fuzzy random variable can be extended to general ones (more precisely, to those whose range is not necessarily finite and whose values are fuzzy subsets of finite-dimensional Euclidean space). This extension is immediately developed by combining some tools in the literature, namely, bootstrap techniques on Banach spaces, a metric between fuzzy sets based on the support function, and an embedding of the space of fuzzy random variables into a Banach space which is based on the support function.
Statistical timing based on incomplete probabilistic descriptions of parameter uncertainty Existing approaches to timing analysis under uncertainty are based on restrictive assumptions. Statistical STA techniques assume that the full probabilistic distribution of parameter uncertainty is available; in reality, the complete probabilistic description often cannot be obtained. In this paper, a new paradigm for parameter uncertainty description is proposed as a way to consistently and rigorously handle partially available descriptions of parameter uncertainty. The paradigm is based on a theory of interval probabilistic models that permit handling uncertainty that is described in a distribution-free mode - just via the range, the mean, and the variance. This permits effectively handling multiple real-life challenges, including imprecise and limited information about the distributions of process parameters, parameters coming from different populations, and the sources of uncertainty that are too difficult to handle via full probabilistic measures (e.g. on-chip supply voltage variation). Specifically, analytical techniques for bounding the distributions of probabilistic interval variables are proposed. In addition, a provably correct strategy for fast Monte Carlo simulation based on probabilistic interval variables is introduced. A path-based timing algorithm implementing the novel modeling paradigm, as well as handling the traditional variability descriptions, has been developed. The results indicate the proposed algorithm can improve the upper bound of the 90th-percentile circuit delay, on average, by 5.3% across the ISCAS'85 benchmark circuits, compared to the worst-case timing estimates that use only the interval information of the partially specified parameters.
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
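A small numpy sketch of the CP idea (it does not use any of the toolboxes listed in the survey): a third-order tensor is written as a sum of rank-one tensors, i.e., outer products of columns of the factor matrices.

    import numpy as np

    def cp_reconstruct(A, B, C):
        # Rebuild a tensor from CP factors: X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r].
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    rng = np.random.default_rng(0)
    R = 2                                                      # assumed CP rank
    A = rng.normal(size=(4, R))
    B = rng.normal(size=(5, R))
    C = rng.normal(size=(3, R))
    X = cp_reconstruct(A, B, C)

    # Each rank-one term is an outer product of one column from each factor matrix.
    X_check = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
                  for r in range(R))
    print(np.allclose(X, X_check))                             # True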
A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making In problems that deal with multiple sources of linguistic information, we can find problems defined in contexts where the linguistic assessments are assessed in linguistic term sets with different granularity of uncertainty and/or semantics (multigranular linguistic contexts). Different approaches have been developed to manage this type of context; they unify the multigranular linguistic information in a unique linguistic term set for easy management of the information. This normalization process can produce a loss of information and hence a lack of precision in the final results. In this paper, we shall present a type of multigranular linguistic context we shall call linguistic hierarchies term sets, such that, when we deal with multigranular linguistic information assessed in these structures, we can unify the information assessed in them without loss of information. To do so, we shall use the 2-tuple linguistic representation model. Afterwards we shall develop a linguistic decision model dealing with multigranular linguistic contexts and apply it to a multi-expert decision-making problem.
Completeness and consistency conditions for learning fuzzy rules The completeness and consistency conditions were introduced in order to achieve acceptable concept recognition rules. In real problems, we can handle noise-affected examples and it is not always possible to maintain both conditions. Moreover, when we use fuzzy information there is a partial matching between examples and rules, therefore the consistency condition becomes a matter of degree. In this paper, a learning algorithm based on soft consistency and completeness conditions is proposed. This learning algorithm combines in a single process rule and feature selection and it is tested on different databases. (C) 1998 Elsevier Science B.V. All rights reserved.
On proactive perfectly secure message transmission This paper studies the interplay of network connectivity and perfectly secure message transmission under the corrupting influence of a Byzantine mobile adversary that may move from player to player but can corrupt no more than t players at any given time. It is known that, in the stationary adversary model where the adversary corrupts the same set of t players throughout the protocol, perfectly secure communication among any pair of players is possible if and only if the underlying synchronous network is (2t + 1)-connected. Surprisingly, we show that (2t + 1)-connectivity is sufficient (and of course, necessary) even in the proactive (mobile) setting where the adversary is allowed to corrupt different sets of t players in different rounds of the protocol. In other words, adversarial mobility has no effect on the possibility of secure communication. Towards this, we use the notion of a Communication Graph, which is useful in modelling scenarios with adversarial mobility. We also show that protocols for reliable and secure communication proposed in [15] can be modified to tolerate the mobile adversary. Further these protocols are round-optimal if the underlying network is a collection of disjoint paths from the sender S to receiver R.
Independent systems of representatives in weighted graphs The following conjecture may have never been explicitly stated, but seems to have been floating around: if the vertex set of a graph with maximal degree Δ is partitioned into sets V i of size 2Δ, then there exists a coloring of the graph by 2Δ colors, where each color class meets each V i at precisely one vertex. We shall name it the strong 2Δ-colorability conjecture. We prove a fractional version of this conjecture. For this purpose, we prove a weighted generalization of a theorem of Haxell, on independent systems of representatives (ISR’s). En route, we give a survey of some recent developments in the theory of ISR’s.
A Strategic Benchmarking Process For Identifying The Best Practice Collaborative Electronic Government Architecture The rapid growth of the Internet has given rise to electronic government (e-government) which enhances communication, coordination, and collaboration between government, business partners, and citizens. An increasing number of national, state, and local government agencies are realizing the benefits of e-government. The transformation of policies, procedures, and people, which is the essence of e-government, cannot happen by accident. An e-government architecture is needed to structure the system, its functions, its processes, and the environment within which it will live. When confronted by the range of e-government architectures, government agencies struggle to identify the one most appropriate to their needs. This paper proposes a novel strategic benchmarking process utilizing the simple additive weighting method (SAW), real options analysis (ROA), and fuzzy sets to benchmark the best practice collaborative e-government architectures based on three perspectives: Government-to-Citizen (G2C), Government-to-Business (G2B), and Government-to-Government (G2G). The contribution of the proposed method is fourfold: (1) it addresses the gaps in the e-government literature on the effective and efficient assessment of the e-government architectures; (2) it provides a comprehensive and systematic framework that combines ROA with SAW; (3) it considers fuzzy logic and fuzzy sets to represent ambiguous, uncertain or imprecise information; and (4) it is applicable to international, national, regional, state/provincial, and local e-government levels.
Performance and Quality Evaluation of a Personalized Route Planning System Advanced personalization of database applications is a big challenge, in particular for distributed mobile environments. We present several new results from a prototype of a route planning system. We demonstrate how to combine qualitative and quantitative preferences gained from situational aspects and from personal user preferences. For performance studies we analyze the runtime efficiency of the SR-Combine algorithm used to evaluate top-k queries. By determining the cost-ratio of random to sorted accesses SR-Combine can automatically tune its performance within the given system architecture. Top-k queries are generated by mapping linguistic variables to numerical weightings. Moreover, we analyze the quality of the query results by several test series, systematically varying the mappings of the linguistic variables. We report interesting insights into this rather under-researched important topic. More investigations, incorporating also cognitive issues, need to be conducted in the future.
Scores: 1.066667, 0.066667, 0.066667, 0.066667, 0.033333, 0.003333, 0, 0, 0, 0, 0, 0, 0, 0
Asymptotic-preserving methods for hyperbolic and transport equations with random inputs and diffusive scalings In this paper we develop a set of stochastic numerical schemes for hyperbolic and transport equations with diffusive scalings and subject to random inputs. The schemes are asymptotic preserving (AP), in the sense that they preserve the diffusive limits of the equations in discrete setting, without requiring excessive refinement of the discretization. Our stochastic AP schemes are extensions of the well-developed deterministic AP schemes. To handle the random inputs, we employ generalized polynomial chaos (gPC) expansion and combine it with stochastic Galerkin procedure. We apply the gPC Galerkin scheme to a set of representative hyperbolic and transport equations and establish the AP property in the stochastic setting. We then provide several numerical examples to illustrate the accuracy and effectiveness of the stochastic AP schemes.
Uniform Regularity for Linear Kinetic Equations with Random Input Based on Hypocoercivity In this paper we study the effect of randomness in kinetic equations that preserve mass. Our focus is in proving the analyticity of the solution with respect to the randomness, which naturally leads to the convergence of numerical methods. The analysis is carried out in a general setting, with the regularity result not depending on the specific form of the collision term, the probability distribution of the random variables, or the regime the system is in and thereby is termed "uniform." Applications include the linear Boltzmann equation, the Bhatnagar-Gross-Krook (BGK) model, and the Carlemann model, among many others, and the results hold true in kinetic, parabolic, and high field regimes. The proof relies on the explicit expression of the high order derivatives of the solution in the random space, and the convergence in time is mainly based on hypocoercivity, which, despite the popularity in PDE analysis of kinetic theory, has rarely been used for numerical algorithms.
Exploring the Locally Low Dimensional Structure in Solving Random Elliptic PDEs. We propose a stochastic multiscale finite element method (StoMsFEM) to solve random elliptic partial differential equations with a high stochastic dimension. The key idea is to simultaneously upscale the stochastic solutions in the physical space for all random samples and explore the low stochastic dimensions of the stochastic solution within each local patch. We propose two effective methods for achieving this simultaneous local upscaling. The first method is a high order interpolation method in the stochastic space that explores the high regularity of the local upscaled quantities with respect to the random variables. The second method is a reduced-order method that explores the low rank property of the multiscale basis functions within each coarse grid patch. Our complexity analysis shows that, compared with the standard FEM on a fine grid, the StoMsFEM can achieve computational savings on the order of $(H/h)^d / (\log(H/h))^k$, where H/h is the ratio between the coarse and the fine grid sizes, d is the physical dimension, and k is the local stochastic dimension. Several numerical examples are presented to demonstrate the accuracy and effectiveness of the proposed methods. In the high contrast example, we observe a factor of 2000 speed-up.
Stochastic finite element methods for partial differential equations with random input data The quantification of probabilistic uncertainties in the outputs of physical, biological, and social systems governed by partial differential equations with random inputs require, in practice, the discretization of those equations. Stochastic finite element methods refer to an extensive class of algorithms for the approximate solution of partial differential equations having random input data, for which spatial discretization is effected by a finite element method. Fully discrete approximations require further discretization with respect to solution dependences on the random variables. For this purpose several approaches have been developed, including intrusive approaches such as stochastic Galerkin methods, for which the physical and probabilistic degrees of freedom are coupled, and non-intrusive approaches such as stochastic sampling and interpolatory-type stochastic collocation methods, for which the physical and probabilistic degrees of freedom are uncoupled. All these method classes are surveyed in this article, including some novel recent developments. Details about the construction of the various algorithms and about theoretical error estimates and complexity analyses of the algorithms are provided. Throughout, numerical examples are used to illustrate the theoretical results and to provide further insights into the methodologies.
A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
Detecting Faces in Images: A Survey Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
Dummynet: a simple approach to the evaluation of network protocols Network protocols are usually tested in operational networks or in simulated environments. With the former approach it is not easy to set and control the various operational parameters such as bandwidth, delays, queue sizes. Simulators are easier to control, but they are often only an approximate model of the desired setting, especially for what regards the various traffic generators (both producers and consumers) and their interaction with the protocol itself. In this paper we show how a simple, yet flexible and accurate network simulator - dummynet - can be built with minimal modifications to an existing protocol stack, allowing experiments to be run on a standalone system. dummynet works by intercepting communications of the protocol layer under test and simulating the effects of finite queues, bandwidth limitations and communication delays. It runs in a fully operational system, hence allowing the use of real traffic generators and protocol implementations, while solving the problem of simulating unusual environments. With our tool, doing experiments with network protocols is as simple as running the desired set of applications on a workstation. A FreeBSD implementation of dummynet, targeted to TCP, is available from the author. This implementation is highly portable and compatible with other BSD-derived systems, and takes less than 300 lines of kernel code.
Compressed Sensing for Networked Data Imagine a system with thousands or millions of independent components, all capable of generating and communicating data. A man-made system of this complexity was unthinkable a few decades ago, but today it is a reality - computers, cell phones, sensors, and actuators are all linked to the Internet, and every wired or wireless device is capable of generating and disseminating prodigious volumes of data. This system is not a single centrally-controlled device, rather it is an ever-growing patchwork of autonomous systems and components, perhaps more organic in nature than any human artifact that has come before. And we struggle to manage and understand this creation, which in many ways has taken on a life of its own. Indeed, several international conferences are dedicated to the scientific study of emergent Internet phenomena. This article considers a particularly salient aspect of this struggle that revolves around large-scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems. The problem is illustrated by a simple example. Consider a network of n nodes, each having a piece of information or data $x_j$, $j = 1, \ldots, n$. These data could be files to be shared, or simply scalar values corresponding to node attributes or sensor measurements. Let us assume that each $x_j$ is a scalar quantity for the sake of this illustration. Collectively these data $x = (x_1, \ldots, x_n)^T$, arranged in a vector, are called networked data to emphasize both the distributed nature of the data and the fact that they may be shared over the underlying communications infrastructure of the network. The networked data vector may be very large; n may be a thousand or a million or more.
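A toy numpy sketch of the compressive idea in this networked-data setting (the sizes, the sparsity level, the Gaussian projections and the use of orthogonal matching pursuit are all illustrative assumptions, not a method from the article): recover a sparse vector of node values from far fewer random aggregates than nodes.

    import numpy as np

    def omp(Phi, y, k):
        # Orthogonal matching pursuit: recover an (approximately) k-sparse x from y = Phi @ x.
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.T @ residual)))       # most correlated column
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(0)
    n, m, k = 1000, 100, 5                      # nodes, random measurements, sparsity (assumed)
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random projections gathered across the network
    x_hat = omp(Phi, Phi @ x, k)
    print(np.allclose(x, x_hat, atol=1e-6))     # typically True at these sizes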
Reconstruction of a low-rank matrix in the presence of Gaussian noise. This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov based estimator of the noise variance.
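A hedged sketch of the orthogonally equivariant idea (soft-thresholding of the singular values here is a generic stand-in, not the specific shrinkage rule or noise-variance estimator proposed in the paper):

    import numpy as np

    def shrink_singular_values(Y, tau):
        # Act only on the singular values of the observed matrix, keeping its singular vectors.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(0)
    signal = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))   # rank-3 signal (assumed)
    Y = signal + 0.1 * rng.normal(size=signal.shape)               # additive Gaussian noise
    X_hat = shrink_singular_values(Y, tau=1.5)                     # threshold chosen ad hoc for the sketch
    print(np.linalg.norm(X_hat - signal) < np.linalg.norm(Y - signal))   # usually True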
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experiment results on very long signals demonstrate the good performance of the SGP and validate our approach.
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
Fuzzy modeling of system behavior for risk and reliability analysis The main objective of the article is to permit reliability analysts/engineers/managers/practitioners to analyze the failure behavior of a system in a more consistent and logical manner. To this effect, the authors propose a methodological and structured framework, which makes use of both qualitative and quantitative techniques for risk and reliability analysis of the system. The framework has been applied to model and analyze a complex industrial system from a paper mill. In the quantitative framework, after developing the Petri net model of the system, the fuzzy synthesis of failure and repair data (using fuzzy arithmetic operations) has been done. Various system parameters of managerial importance such as repair time, failure rate, mean time between failures, availability, and expected number of failures are computed to quantify the behavior in terms of fuzzy, crisp and defuzzified values. Further, to improve upon the reliability and maintainability characteristics of the system, an in-depth qualitative analysis of the system is carried out using failure mode and effect analysis (FMEA) by listing out all possible failure modes, their causes and their effects on system performance. To address the limitations of the traditional FMEA method based on the risk priority number score, a risk ranking approach based on fuzzy and grey relational analysis is proposed to prioritize failure causes.
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the $L_1$-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of the imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that in contrast to the conventional $L_2$-norm regularization method and total variation (TV) regularization method, the $L_1$-norm regularization method can sharpen the edges and is more robust against data noise.
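For orientation, a generic split Bregman sketch for an L1-regularized least-squares problem (a stand-in only; the actual EIT inverse problem needs the real sensitivity matrix and problem-specific regularization tuning):

    import numpy as np

    def shrink(v, t):
        # Soft-thresholding operator used inside the split Bregman iteration.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def split_bregman_l1(A, b, lam=0.05, mu=1.0, iters=200):
        # Solve min_x 0.5*||A x - b||^2 + lam*||x||_1 via the splitting d = x.
        n = A.shape[1]
        x, d, bk = np.zeros(n), np.zeros(n), np.zeros(n)
        AtA = A.T @ A + mu * np.eye(n)
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(AtA, Atb + mu * (d - bk))   # quadratic subproblem
            d = shrink(x + bk, lam / mu)                    # L1 subproblem
            bk = bk + x - d                                 # Bregman update
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 120))
    x_true = np.zeros(120)
    x_true[rng.choice(120, 6, replace=False)] = 1.0
    x_hat = split_bregman_l1(A, A @ x_true + 0.01 * rng.normal(size=60))
    print(np.round(np.sort(np.abs(x_hat))[-6:], 2))   # the six largest entries should sit near 1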
Scores: 1.05, 0.05, 0.05, 0.003333, 0.000543, 0, 0, 0, 0, 0, 0, 0, 0, 0
A new evaluation model for intellectual capital based on computing with linguistic variable In a knowledge era, intellectual capital has become a determinant resource for enterprises to retain and improve competitive advantage. Because the nature of intellectual capital is abstract, intangible, and difficult to measure, it becomes a challenge for business managers to evaluate intellectual capital performance effectively. Recently, several methods have been proposed to assist business managers in evaluating the performance of intellectual capital. However, they also face information loss problems during the process of integrating subjective evaluations. Therefore, this paper proposes a suitable model for intellectual capital performance evaluation by combining the 2-tuple fuzzy linguistic approach with a multiple criteria decision-making (MCDM) method. The model handles the evaluation integration process and effectively avoids information loss. Based on the proposed model, its feasibility is demonstrated by the result of an intellectual capital performance evaluation for a high-technology company in Taiwan.
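For context, a small sketch of the 2-tuple linguistic representation underlying such models (the five-label scale and the expert weights are assumed): a value β on the label-index scale is stored as the pair (closest label, symbolic translation β minus its index), so aggregated results can be mapped back to a label without rounding away information.

    # Sketch of the 2-tuple linguistic model; the 5-label scale below is an assumption.
    LABELS = ["very low", "low", "medium", "high", "very high"]   # s_0 .. s_4

    def to_two_tuple(beta):
        # Map beta in [0, len(LABELS)-1] to (label, symbolic translation in [-0.5, 0.5)).
        i = int(round(beta))
        return LABELS[i], round(beta - i, 3)

    def aggregate(indices, weights):
        # Weighted mean of label indices, returned as a 2-tuple (no information loss).
        beta = sum(i * w for i, w in zip(indices, weights)) / sum(weights)
        return to_two_tuple(beta)

    # Three experts rate an intellectual-capital criterion as high, medium, very high.
    print(aggregate([3, 2, 4], [0.5, 0.3, 0.2]))   # -> ('high', -0.1)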
An Integrated Methodology Using Linguistic Promethee And Maximum Deviation Method For Third-Party Logistics Supplier Selection The purpose of this paper is to present a framework and a suitable method for selecting the best logistics supplier. In general, many quantitative and qualitative criteria should be considered simultaneously when making the decision of logistics supplier selection. The information for judging the performance of logistics suppliers comes from customers' opinions, experts' opinions and the operational data in the real environment. In this situation, the selection of logistics suppliers becomes a decision problem involving uncertainty and fuzziness. Therefore, we combined the linguistic PROMETHEE method with the maximum deviation method to determine the ranking order of logistics suppliers. An example is then implemented to demonstrate the practicability of the proposed method. Finally, some conclusions are discussed at the end of this paper.
Concept Representation and Database Structures in Fuzzy Social Relational Networks We discuss the idea of fuzzy relationships and their role in modeling weighted social relational networks. The paradigm of computing with words is introduced, and the role that fuzzy sets play in representing linguistic concepts is described. We discuss how these technologies can provide a bridge between a network analyst's linguistic description of social network concepts and the formal model of the network. We then turn to some examples of taking an analyst's network concepts and formally representing them in terms of network properties. We first do this for the concept of clique and then for the idea of node importance. Finally, we introduce the idea of vector-valued nodes and begin developing a technology of social network database theory.
Lama: A Linguistic Aggregation Of Majority Additive Operator A problem that we have encountered in the aggregation process is how to aggregate the elements that have cardinality > 1. The purpose of this article is to present a new aggregation operator of linguistic labels that uses the cardinality of these elements, the linguistic aggregation of majority additive (LAMA) operator. We also present an extension of the LAMA operator under the two-tuple fuzzy linguistic representation model. (C) 2003 Wiley Periodicals, Inc.
Team Situation Awareness Using Web-Based Fuzzy Group Decision Support Systems Situation awareness (SA) is an important element in supporting responses and decision making for crisis problems. Decision making for a complex situation often needs a team working cooperatively to reach consensus awareness of the situation. Team SA is characterized by information sharing, opinion integration and consensus SA generation. In the meantime, various uncertainties are involved in team SA during information collection and awareness generation. Also, the collaboration between team members may take place across distances and needs web-based technology to facilitate it. This paper presents a web-based fuzzy group decision support system (WFGDSS) and demonstrates how this system can provide a means of support for generating team SA in a distributed team work context with the ability to handle uncertain information.
Intelligent multi-criteria fuzzy group decision-making for situation assessments Organizational decisions and situation assessments are often made in groups, and decision and assessment processes involve various uncertain factors. To make group decision-making more efficient, this study presents a new rational-political model as a systematic means of supporting group decision-making in an uncertain environment. The model takes advantage of both rational and political models and can handle inconsistent assessments, incomplete information and inaccurate opinions in deriving the best solution for the group decision under a sequential framework. The model particularly identifies three uncertain factors involved in a group decision-making process: decision makers' roles, preferences for alternatives, and judgments for assessment-criteria. Based on this model, an intelligent multi-criteria fuzzy group decision-making method is proposed to deal with the three uncertain factors described by linguistic terms. The proposed method uses general fuzzy numbers and aggregates these factors into a satisfactory group decision that is in the most acceptable degree for the group. Inference rules are particularly introduced into the method for checking the consistency of individual preferences. Finally, a real case study on a business situation assessment is illustrated by the proposed method.
A recommender system for research resources based on fuzzy linguistic modeling Nowadays, the increasing popularity of the Internet has led to an abundant amount of information being created and delivered over electronic media. This makes information access a complex activity for users, who need tools to assist them in obtaining the required information. Recommender systems are tools whose objective is to evaluate and filter the great amount of information available in a specific scope to assist users in their information access processes. Another obstacle is the great variety of representations of information, especially when users take part in the process, so we need more flexibility in information processing. Fuzzy linguistic modeling allows flexible information to be represented and handled. Similar problems appear in other frameworks, such as digital academic libraries, research offices, business contacts, etc. We focus on information access processes in technology transfer offices. The aim of this paper is to develop a recommender system for research resources based on fuzzy linguistic modeling. The system helps researchers and the companies in their environment by automatically providing them with information about research resources (calls or projects) in their areas of interest. It is designed using some filtering tools and a particular fuzzy linguistic modeling, called multi-granular fuzzy linguistic modeling, which is useful when we have to assess different qualitative concepts. The system is working at the University of Granada, and experimental results show that it is feasible and effective.
Combining numerical and linguistic information in group decision making People give information about their personal preferences in many different ways, depending on their background. This paper deals with group decision making problems in which the solution depends on information of a different nature, i.e., assuming that the experts express their preferences with numerical or linguistic values. The aim of this paper is to present a proposal for this problem. We introduce a fusion operator for numerical and linguistic information. This operator combines linguistic values (assessed in the same label set) with numerical ones (assessed in the interval (0,1)). It is based on two transformation methods between numerical and linguistic values, which are defined using the concept of the characteristic values proposed in this paper. Its application to group decision making problems is illustrated by means of a particular fusion operator guided by fuzzy majority. Considering that the experts express their opinions by means of fuzzy or linguistic preference relations, this operator is used to develop a choice process for the alternatives, allowing solutions to be obtained in line with the majority of the experts' opinions. © 1998 Elsevier Science Inc. All rights reserved.
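The numerical-to-linguistic transformation described above can be illustrated with a small sketch. The code below is only a minimal illustration, not the paper's characteristic-value method: it assumes a uniformly distributed label set with triangular membership functions and maps a number in the unit interval to membership degrees over the labels (the label names and granularity are made-up assumptions).

```python
def triangular(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def numeric_to_linguistic(value, labels):
    """Map a number in [0, 1] to membership degrees over a uniform label set."""
    g = len(labels) - 1                             # granularity index
    out = {}
    for i, lab in enumerate(labels):
        a, b, c = (i - 1) / g, i / g, (i + 1) / g   # uniform triangular partition
        out[lab] = round(triangular(value, a, b, c), 3)
    return out

labels = ["none", "low", "medium", "high", "perfect"]
print(numeric_to_linguistic(0.62, labels))          # mostly "medium" and "high"
```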
Preference Modelling ABSTRACT This paper provides the reader with a presentation of preference modelling fundamental notions as well as some recent results in this field. Preference modelling is an inevitable step in a variety of fields: economy, sociology, psychology, mathematical programming, even medicine, archaeology, and obviously decision analysis. Our notation and some basic definitions, such as those of binary relation, properties and ordered sets, are presented at the beginning of the paper. We start by discussing different reasons for constructing a model of preference. We then go through a number of issues that influence the construction of preference models. Different formalisations besides classical logic, such as fuzzy sets and non-classical logics, become necessary. We then present different types of preference structures reflecting the behavior of a decision-maker: classical, extended and valued ones. It is relevant to have a numerical representation of preferences: functional representations, value functions. The concepts of thresholds and minimal representation are also introduced in this section. In section 7, we briefly explore the concept of deontic logic (logic of preference) and other formalisms associated with "compact representation of preferences" introduced for spe-
Accuracy and complexity evaluation of defuzzification strategies for the discretised interval type-2 fuzzy set. The work reported in this paper addresses the challenge of the efficient and accurate defuzzification of discretised interval type-2 fuzzy sets. The exhaustive method of defuzzification for type-2 fuzzy sets is extremely slow, owing to its enormous computational complexity. Several approximate methods have been devised in response to this bottleneck. In this paper we survey four alternative strategies for defuzzifying an interval type-2 fuzzy set: (1) the Karnik-Mendel Iterative Procedure, (2) the Wu-Mendel Approximation, (3) the Greenfield-Chiclana Collapsing Defuzzifier, and (4) the Nie-Tan Method. We evaluated the different methods experimentally for accuracy, by means of a comparative study using six representative test sets with varied characteristics, using the exhaustive method as the standard. A preliminary ranking of the methods was achieved using a multicriteria decision making methodology based on the assignment of weights according to performance. The ranking produced, in order of decreasing accuracy, is (1) the Collapsing Defuzzifier, (2) the Nie-Tan Method, (3) the Karnik-Mendel Iterative Procedure, and (4) the Wu-Mendel Approximation. Following that, a more rigorous analysis was undertaken by means of the Wilcoxon Nonparametric Test, in order to validate the preliminary test conclusions. It was found that there was no evidence of a significant difference between the accuracy of the Collapsing and Nie-Tan Methods, and between that of the Karnik-Mendel Iterative Procedure and the Wu-Mendel Approximation. However, there was evidence to suggest that the Collapsing and Nie-Tan Methods are more accurate than the Karnik-Mendel Iterative Procedure and the Wu-Mendel Approximation. In relation to efficiency, each method's computational complexity was analysed, resulting in a ranking (from least computationally complex to most computationally complex) as follows: (1) the Nie-Tan Method, (2) the Karnik-Mendel Iterative Procedure (lowest complexity possible), (3) the Greenfield-Chiclana Collapsing Defuzzifier, (4) the Karnik-Mendel Iterative Procedure (highest complexity possible), and (5) the Wu-Mendel Approximation. (C) 2013 Elsevier Inc. All rights reserved.
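As a concrete illustration of the least computationally complex strategy surveyed above, the sketch below implements a Nie-Tan style defuzzification of a discretised interval type-2 fuzzy set: the crisp output is the centroid of the average of the lower and upper membership functions. This is a minimal sketch of that standard formulation; the discretisation and membership values below are made up.

```python
import numpy as np

def nie_tan_defuzzify(x, lower_mf, upper_mf):
    """Centroid of the average of the lower and upper membership functions."""
    x = np.asarray(x, dtype=float)
    mid = (np.asarray(lower_mf, dtype=float) + np.asarray(upper_mf, dtype=float)) / 2.0
    return float(np.sum(x * mid) / np.sum(mid))

# Made-up discretised interval type-2 fuzzy set on [0, 10].
x = np.linspace(0.0, 10.0, 101)
upper = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)        # upper membership function
lower = 0.6 * np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)  # lower membership function

print(round(nie_tan_defuzzify(x, lower, upper), 4))  # ~5.0 for this symmetric set
```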
Fuzzy Reasoning Based On The Extension Principle According to the operation of decomposition (also known as representation theorem) (Negoita CV, Ralescu, DA. Kybernetes 1975;4:169-174) in fuzzy set theory, the whole fuzziness of an object can be characterized by a sequence of local crisp properties of that object. Hence, any fuzzy reasoning could also be implemented by using a similar idea, i.e., a sequence of precise reasoning. More precisely, we could translate a fuzzy relation "If A then B" of the Generalized Modus Ponens Rule (the most common and widely used interpretation of a fuzzy rule, where A and B are fuzzy sets in a universe of discourse X and a universe of discourse Y, respectively) into a corresponding precise relation between a subset of P(X) and a subset of P(Y), and then extend this corresponding precise relation to two kinds of transformations between all L-type fuzzy subsets of X and those of Y by using Zadeh's extension principle, where L denotes a complete lattice. In this way, we provide an alternative approach to the existing compositional rule of inference, which performs fuzzy reasoning based on the extension principle. The approach does not depend on the choice of fuzzy implication operator nor on the choice of a t-norm. The detailed reasoning methods, applied in particular to the Generalized Modus Ponens and the Generalized Modus Tollens, are established and their properties are further investigated in this paper. (C) 2001 John Wiley & Sons, Inc.
Load Balancing in Quorum Systems This paper introduces and studies the question of balancing the load on processors participating in a given quorum system. Our proposed measure for the degree of balancing is the ratio between the load on the least frequently referenced element and on the most frequently used one. We give some simple sufficient and necessary conditions for perfect balancing. We then look at the balancing properties of the common class of voting systems and prove that every voting system with odd total weight is perfectly balanced. (This holds, in fact, for the more general class of ordered systems.) We also give some characterizations for the balancing ratio in the worst case. It is shown that for any quorum system with a universe of size $n$, the balancing ratio is no smaller than $1/(n-1)$, and this bound is the best possible. When restricting attention to nondominated coteries (NDCs), the bound becomes $2/\bigl(n-\log_2 n+o(\log n)\bigr)$, and there exists an NDC with ratio $2/\bigl(n-\log_2 n-o(\log n)\bigr)$. Next, we study the interrelations between the two basic parameters of load balancing and quorum size. It turns out that the two size parameters suitable for our investigation are the size of the largest quorum and the optimally weighted average quorum size (OWAQS) of the system. For the class of ordered NDCs (for which perfect balancing is guaranteed), it is shown that over a universe of size $n$, some quorums of size $\lceil(n+1)/2\rceil$ or more must exist (and this bound is the best possible). A similar lower bound holds for the OWAQS measure if we restrict attention to voting systems. For nonordered systems, perfect balancing can sometimes be achieved with much smaller quorums. A lower bound of $\Omega(\sqrt{n})$ is established for the maximal quorum size and the OWAQS of any perfectly balanced quorum system over $n$ elements, and this bound is the best possible. Finally, we turn to quorum systems that cannot be perfectly balanced, but have some balancing ratio $0
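The balancing measure used above is easy to compute once a quorum system and an access strategy (a probability of picking each quorum) are fixed: the load of an element is the total probability of the quorums containing it, and the ratio compares the least and most loaded elements. The sketch below is a small illustration on two made-up coteries; it is not code from the paper.

```python
from collections import defaultdict

def element_loads(quorums, strategy):
    """Load of element v = sum of probabilities of the quorums that contain v."""
    load = defaultdict(float)
    for quorum, prob in zip(quorums, strategy):
        for v in quorum:
            load[v] += prob
    return dict(load)

def balancing_ratio(quorums, strategy):
    """Ratio between the smallest and the largest element load."""
    loads = element_loads(quorums, strategy).values()
    return min(loads) / max(loads)

# Majority (voting) system over {1, 2, 3}: perfectly balanced under the uniform strategy.
majority = [{1, 2}, {1, 3}, {2, 3}]
print(balancing_ratio(majority, [1/3, 1/3, 1/3]))   # 1.0

# A star-like coterie: element 1 sits in every quorum, so the system is unbalanced.
star = [{1, 2}, {1, 3}]
print(balancing_ratio(star, [0.5, 0.5]))            # 0.5
```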
An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform The problems of random projections and sparse reconstruction have much in common and individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces that were successfully used in the context of sparse reconstruction to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley's theorem for bounding Gaussian processes. Our main result states that any set of $N = \exp(\tilde{O}(n))$ real vectors in $n$ dimensional space can be linearly mapped to a space of dimension $k=O(\log N\polylog(n))$, while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time $O(n\log n)$ on each vector. This improves on the best known $N = \exp(\tilde{O}(n^{1/2}))$ achieved by Ailon and Liberty and $N = \exp(\tilde{O}(n^{1/3}))$ by Ailon and Chazelle. The dependence in the distortion constant however is believed to be suboptimal and subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a $\polylog(n)$ factor while considerably simplifying their constructions.
Evaluating process performance based on the incapability index for measurements with uncertainty Process capability indices are widely used in industry to measure the ability of firms or their suppliers to meet quality specifications. The index C_PP, which is easy to use and analytically tractable, has been successfully developed and applied by competitive firms to dominate highly-profitable markets by improving quality and productivity. Hypothesis testing is very essential for practical decision-making. Generally, the underlying data are assumed to be precise numbers, but in general it is much more realistic to consider fuzzy values, which are imprecise numbers. In this case, the test statistic also yields an imprecise number, and decision rules based on the crisp-based approach are inappropriate. This study investigates the situation of uncertain or imprecise product quality measurements. A set of confidence intervals for sample mean and variance is used to produce triangular fuzzy numbers for estimating the C_PP index. Based on the δ-cuts of the fuzzy estimators, a decision testing rule and procedure are developed to evaluate process performance based on critical values and fuzzy p-values. An efficient computer program is also designed for calculating fuzzy p-values. Finally, an example is examined for demonstrating the application of the proposed approach.
Scores: 1.035922, 0.033969, 0.017043, 0.014872, 0.012758, 0.007587, 0.004403, 0.001324, 0.000183, 0.000029, 0.000001, 0, 0, 0
Merging distributed database summaries The database summarization system coined SaintEtiQ provides multi-resolution summaries of structured data stored in a centralized database. Summaries are computed online with a conceptual hierarchical clustering algorithm. However, most companies work in distributed legacy environments, and consequently the current centralized version of SaintEtiQ is either not feasible (privacy preserving) or not desirable (resource limitations). To address this problem, we propose new algorithms to generate a single summary hierarchy given two distinct hierarchies, without scanning the raw data. The Greedy Merging Algorithm (GMA) takes all leaves of both hierarchies and generates the optimal partitioning of the considered data set with regard to a cost function (compactness and separation). Then, a hierarchical organization of summaries is built by agglomerating or dividing clusters such that the cost function may emphasize local or global patterns in the data. Thus, we obtain two different hierarchies according to the performed optimisation. However, this approach breaks down due to its exponential time complexity. Two alternative approaches with constant time complexity w.r.t. the number of data items are proposed to tackle this problem. The first one, called Merge by Incorporation Algorithm (MIA), relies on the SaintEtiQ engine, whereas the second approach, named Merge by Alignment Algorithm (MAA), consists in rearranging summaries by levels in a top-down manner. Then, we compare those approaches using an original quality measure in order to quantify how good our merged hierarchies are. Finally, an experimental study, using real data sets, shows that the merging processes (MIA and MAA) are efficient in terms of computational time.
Querying the SaintEtiQ Summaries - A First Attempt For some years, data summarization techniques have been developed to handle the growth of databases. However these techniques are usually not provided with tools for end-users to efficiently use the produced summaries. This paper presents a first attempt to develop a querying tool for the SAINTETIQ summarization model. The proposed search algorithm takes advantage of the hierarchical structure of the SAINTETIQ summaries to efficiently answer questions such as "how are, on some attributes, the tuples which have specific characteristics?". Moreover, this algorithm can be seen both as a boolean querying mechanism over a hierarchy of summaries, and as a flexible querying mechanism over the underlying relational tuples.
SAINTETIQ: a fuzzy set-based approach to database summarization In this paper, a new approach to database summarization is introduced through our model named SAINTETIQ. Based on a hierarchical conceptual clustering algorithm, SAINTETIQ incrementally builds a summary hierarchy from database records. Furthermore, the fuzzy set-based representation of data allows vague, uncertain or imprecise information to be handled, and improves the accuracy and robustness of the summary construction process. Finally, background knowledge provides a user-defined vocabulary to synthesize the summary descriptions and make them highly intelligible.
Fuzzy Sets
General formulation of formal grammars By extracting the basic properties common to the formal grammars appeared in existing literatures, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
Fuzzy Algorithms
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
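A worked sketch of the canonical first-order form referred to above: a timing quantity is a mean plus sensitivities to correlated global variation sources plus an independently random term, and the addition needed when propagating arrival times combines two such forms term by term, with the independent parts combining in root-sum-square fashion. This is a minimal illustration of the canonical "add" only; the statistical "max" used in block-based timing is more involved and is not shown, and all numbers are made up.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Canonical:
    """a0 + sum_i a[i] * dX_i + a_r * dR, with dX_i and dR unit-normal."""
    a0: float          # mean value
    a: List[float]     # sensitivities to correlated global sources of variation
    a_r: float         # sensitivity to an independently random source

    def sigma(self) -> float:
        return math.sqrt(sum(ai * ai for ai in self.a) + self.a_r ** 2)

def add(x: Canonical, y: Canonical) -> Canonical:
    """Sum of two canonical forms (e.g. arrival time + gate delay)."""
    return Canonical(
        a0=x.a0 + y.a0,
        a=[xi + yi for xi, yi in zip(x.a, y.a)],
        a_r=math.sqrt(x.a_r ** 2 + y.a_r ** 2),  # independent parts add in RSS
    )

arrival = Canonical(100.0, [5.0, 2.0], 3.0)   # picoseconds, made-up numbers
delay   = Canonical(20.0,  [1.0, 0.5], 1.0)
out = add(arrival, delay)
print(out.a0, out.a, round(out.sigma(), 3))
```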
A Multilinear Singular Value Decomposition We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pair-wise symmetric tensors.
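For a small dense tensor, the multilinear SVD discussed above can be computed by taking an ordinary SVD of each mode-n unfolding and then forming the core tensor. The NumPy sketch below illustrates that construction on a random third-order tensor; it is a plain, non-truncated illustration rather than an optimized implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def hosvd(T):
    """Return factor matrices U1..UN and core S with T = S x1 U1 ... xN UN."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T
    for n, Un in enumerate(U):
        S = mode_dot(S, Un.T, n)          # core = T multiplied by U_n^T along each mode
    return U, S

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 3))
U, S = hosvd(T)
T_rec = S
for n, Un in enumerate(U):
    T_rec = mode_dot(T_rec, Un, n)        # reconstruct: S x1 U1 x2 U2 x3 U3
print(np.allclose(T, T_rec))              # True: exact (non-truncated) decomposition
```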
A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making In problems that deal with multiple sources of linguistic information, we can find problems defined in contexts where the linguistic assessments are expressed in linguistic term sets with different granularity of uncertainty and/or semantics (multigranular linguistic contexts). Different approaches have been developed to manage this type of context, which unify the multigranular linguistic information in a unique linguistic term set for easy management of the information. This normalization process can produce a loss of information and hence a lack of precision in the final results. In this paper, we shall present a type of multigranular linguistic context we shall call linguistic hierarchy term sets, such that, when we deal with multigranular linguistic information assessed in these structures, we can unify the information assessed in them without loss of information. To do so, we shall use the 2-tuple linguistic representation model. Afterwards we shall develop a linguistic decision model dealing with multigranular linguistic contexts and apply it to a multi-expert decision-making problem.
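A small sketch of the 2-tuple linguistic representation used above: a value β in [0, g] (where g + 1 is the granularity of the term set) is represented as the pair (s_round(β), β − round(β)), so aggregation can be carried out numerically and translated back to labels without loss. The code is a minimal illustration of the usual Δ / Δ⁻¹ translation functions; the label set is a made-up example.

```python
LABELS = ["N", "VL", "L", "M", "H", "VH", "P"]   # granularity g + 1 = 7

def delta(beta):
    """Delta: a value in [0, g] -> 2-tuple (label index, symbolic translation)."""
    i = int(round(beta))
    return i, round(beta - i, 4)                  # alpha roughly in [-0.5, 0.5)

def delta_inv(two_tuple):
    """Delta^-1: 2-tuple -> equivalent numerical value in [0, g]."""
    i, alpha = two_tuple
    return i + alpha

def aggregate(two_tuples):
    """Arithmetic mean of 2-tuples, computed through Delta^-1 and Delta."""
    betas = [delta_inv(t) for t in two_tuples]
    return delta(sum(betas) / len(betas))

opinions = [(3, 0.0), (4, -0.2), (5, 0.1)]        # (index into LABELS, alpha)
idx, alpha = aggregate(opinions)
print(LABELS[idx], alpha)                         # e.g. H with a small negative alpha
```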
User profiles and fuzzy logic for web retrieval issues We present a study of the role of user profiles using fuzzy logic in web retrieval processes. Flexibility for user interaction and for adaptation in profile construction becomes an important issue. We focus our study on user profiles, including creation, modification, storage, clustering and interpretation. We also consider the role of fuzzy logic and other soft computing techniques to improve user profiles. Extended profiles contain additional information related to the user that can be used to personalize and customize the retrieval process as well as the web site. Web mining processes can be carried out by means of fuzzy clustering of these extended profiles and fuzzy rule construction. Fuzzy inference can be used in order to modify queries and extract knowledge from profiles with marketing purposes within a web framework. An architecture of a portal that could support web mining technology is also presented.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
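To make the variable-splitting and augmented-Lagrangian idea concrete, the sketch below solves the simplest instance of the formulation above, min over x of ½‖Ax − b‖² + λ‖x‖₁, with an alternating direction method of multipliers: a quadratic x-update, a soft-thresholding z-update and a dual update. This is a generic, minimal ADMM sketch on a synthetic problem, not the paper's frame- or wavelet-based solver.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via variable splitting x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)      # prox of the l1 regularizer
        u = u + x - z                             # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_l1(A, b, lam=0.1)
print(int(np.sum(np.abs(x_hat) > 1e-3)), "nonzero coefficients in the estimate")
```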
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is, however, a main difference from traditional quality assessment approaches: the focus now lies on the quality perceived by the user, as opposed to the network-centered approach classically proposed. In this paper we overview the most relevant challenges to performing Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms already deployed, such as Quality of Service (QoS). To assist in the handling of such challenges, we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1 + √5)√q unless δ − 1 is a multiple of p, where q = pⁿ. When q is odd and not a square, we are able to improve this lower bound to roughly √(3q).
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a chocolate manufacturing company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find the optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, the highest level of units of products is obtained when the vagueness is low.
Scores: 1.105263, 0.033333, 0.017822, 0.001021, 0.000008, 0.000003, 0, 0, 0, 0, 0, 0, 0, 0
Sparse Algorithms are not Stable: A No-free-lunch Theorem. We consider two desired properties of learning algorithms: *sparsity* and *algorithmic stability*. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that $\ell_1$-regularized regression (Lasso) cannot be stable, while $\ell_2$-regularized regression is known to have strong stability properties and is therefore not sparse.
Robust Regression and Lasso Lasso, or l1 regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Second, robustness can itself be used as an avenue for exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formulation is related to kernel density estimation, and based on this approach, a proof that Lasso is consistent is given, using robustness directly. Finally, a theorem is proved which states that sparsity and algorithmic stability contradict each other, and hence Lasso is not stable.
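The sparsity side of the trade-off discussed above is easy to see numerically: on the same data an ℓ1-regularized fit zeroes out most coefficients while an ℓ2-regularized fit keeps them all small but nonzero. The snippet below is a minimal illustration using scikit-learn's Lasso and Ridge estimators on synthetic data; it demonstrates sparsity only and does not measure stability.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = rng.standard_normal(k) * 3.0           # only k features truly matter
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)                # l1-regularized least squares
ridge = Ridge(alpha=1.0).fit(X, y)                # l2-regularized least squares

print("Lasso nonzeros:", int(np.sum(lasso.coef_ != 0)))   # a small number: sparse
print("Ridge nonzeros:", int(np.sum(ridge.coef_ != 0)))   # nearly all of p: dense
```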
On sparse representations in arbitrary redundant bases The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases. The question that is considered is the following: given a matrix A of dimension (n,m) with m > n and a vector b=Ax, find a sufficient condition for b to have a unique sparsest representation x as a linear combination of columns of A. Answers to this question are known when A is the concatenation of two unitary matrices and either an extensive combinatorial search is performed or a linear program is solved. We consider arbitrary A matrices and give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program or a parametrized quadratic program. The proof is elementary and the possibility of using a quadratic program opens perspectives to the case where b=Ax+e with e a vector of noise or modeling errors.
Greed is good: algorithmic results for sparse approximation This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
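A minimal sketch of the greedy algorithm analysed above: orthogonal matching pursuit repeatedly picks the dictionary atom most correlated with the current residual, refits the signal on the selected atoms by least squares, and updates the residual. This is a generic textbook-style implementation on a synthetic dictionary, not code from the paper.

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: select n_atoms columns of D to approximate y."""
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        correlations = np.abs(D.T @ residual)
        correlations[support] = 0.0               # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs     # residual after orthogonal projection
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
x_true = np.zeros(256)
x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]
y = D @ x_true
x_hat = omp(D, y, n_atoms=3)
print(sorted(np.nonzero(x_hat)[0]))               # expected: [3, 50, 120]
```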
Sparse representations in unions of bases The purpose of this correspondence is to generalize a result by Donoho and Huo and Elad and Bruckstein on sparse representations of signals in a union of two orthonormal bases for R^N. We consider general (redundant) dictionaries for R^N, and derive sufficient conditions for having unique sparse representations of signals in such dictionaries. The special case where the dictionary is given by the union of L≥2 orthonormal bases for R^N is studied in more detail. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of the ℓ0 optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may be highly redundant.
Explicit cost bounds of algorithms for multivariate tensor product problems We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form $(c(d) + 2)\,\beta_1\left(\beta_2 + \beta_3\,\frac{\ln 1/\varepsilon}{d-1}\right)^{\beta_4(d-1)}\left(\frac{1}{\varepsilon}\right)^{\beta_5}$. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the β_i's do not...
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
Convergence Rates of Best N-term Galerkin Approximations for a Class of Elliptic sPDEs Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D⊂ℝ^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y=y(ω)=(y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x∈D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)^∞ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called “generalized polynomial chaos” (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V valued polynomials in the variable y∈U are established. These estimates are of the form N^{−r}, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N “samples” (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty}\subset V$ of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)^∞ to a smoothness space W⊂V are established, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error. The space W coincides with $H^{2}(D)\cap H^{1}_{0}(D)$ in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom N_dof can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.
Uncertainty measures for interval type-2 fuzzy sets Fuzziness (entropy) is a commonly used measure of uncertainty for type-1 fuzzy sets. For interval type-2 fuzzy sets (IT2 FSs), centroid, cardinality, fuzziness, variance and skewness are all measures of uncertainties. The centroid of an IT2 FS has been defined by Karnik and Mendel. In this paper, the other four concepts are defined. All definitions use a Representation Theorem for IT2 FSs. Formulas for computing the cardinality, fuzziness, variance and skewness of an IT2 FS are derived. These definitions should be useful in IT2 fuzzy logic systems design using the principles of uncertainty, and in measuring the similarity between two IT2 FSs.
Stability and Instance Optimality for Gaussian Measurements in Compressed Sensing In compressed sensing, we seek to gain information about a vector x∈ℝ^N from d ≪ N nonadaptive linear measurements. Candes, Donoho, Tao et al. (see, e.g., Candes, Proc. Intl. Congress Math., Madrid, 2006; Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006; Donoho, IEEE Trans. Inf. Theory 52:1289–1306, 2006) proposed to seek a good approximation to x via ℓ1 minimization. In this paper, we show that in the case of Gaussian measurements, ℓ1 minimization recovers the signal well from inaccurate measurements, thus improving the result from Candes et al. (Commun. Pure Appl. Math. 59:1207–1223, 2006). We also show that this numerically friendly algorithm (see Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006) with overwhelming probability recovers the signal with accuracy comparable to the accuracy of the best k-term approximation in the Euclidean norm when k∼d/ln N.
On Generalized Induced Linguistic Aggregation Operators In this paper, we define various generalized induced linguistic aggregation operators, including the generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components, which are linguistic variables (or uncertain linguistic variables) and are then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are special cases of the GILOWA operator, the induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are special cases of the GIULOWA operator, and the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are special cases of the GIULOWG operator.
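A minimal sketch of the induced-ordering idea behind the operators defined above: each argument carries an order-inducing value, the arguments are reordered by that value (not by their own magnitude), and a weighted generalized mean is then applied to the reordered list. The code illustrates this mechanism on label indices of a linguistic term set; it is an illustration of induced OWA-style aggregation, not the exact GILOWA definition from the paper.

```python
LABELS = ["N", "VL", "L", "M", "H", "VH", "P"]

def induced_owa(pairs, weights, lam=1.0):
    """pairs = [(order-inducing value, label index)]; aggregate the label indices.

    The arguments are ordered by the inducing value (descending) and combined
    with a weighted generalized mean of parameter lam (lam = 1 -> weighted mean).
    """
    ordered = [idx for _, idx in sorted(pairs, key=lambda p: p[0], reverse=True)]
    agg = sum(w * (idx ** lam) for w, idx in zip(weights, ordered)) ** (1.0 / lam)
    return LABELS[int(round(agg))]

# (importance of the source, assessed label index): importance induces the order.
pairs = [(0.9, 5), (0.4, 2), (0.7, 4)]
weights = [0.5, 0.3, 0.2]                  # attached to ordered positions
print(induced_owa(pairs, weights))         # weights favour the most important sources
```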
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them the R-values and c-values of fuzzy rules, respectively. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition, in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that, by using the proposed indices, the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while keeping system performance at a satisfactory level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
Scores: 1.249984, 0.249984, 0.006267, 0.001547, 0.000031, 0, 0, 0, 0, 0, 0, 0, 0, 0
A parametric representation of linguistic hedges in Zadeh's fuzzy logic This paper proposes a model for the parametric representation of linguistic hedges in Zadeh's fuzzy logic. In this model each linguistic truth-value, which is generated from a primary term of the linguistic truth variable, is identified by a real number r depending on the primary term. It is shown that the model yields a method of efficiently computing linguistic truth expressions accompanied with a rich algebraic structure of the linguistic truth domain, namely De Morgan algebra. Also, a fuzzy logic based on the parametric representation of linguistic truth-values is introduced.
Gujarati character recognition using adaptive neuro fuzzy classifier with fuzzy hedges. Recognition of Indian scripts is a challenging problem, and work towards the development of an OCR for handwritten Gujarati, an Indian script, is still in its infancy. This paper implements an Adaptive Neuro Fuzzy Classifier (ANFC) for Gujarati character recognition using fuzzy hedges (FHs). FHs are trained with other network parameters by the scaled conjugate gradient training algorithm. The tuned fuzzy hedge values of fuzzy sets improve the flexibility of fuzzy sets; this property of FHs improves the distinguishability rates of overlapped classes. This work is further extended to feature selection based on FHs. The values of fuzzy hedges can be used to show the importance degree of fuzzy sets. According to the FH value, redundant and noisy features can be eliminated, and significant features can be selected. An FH-based feature selection algorithm is implemented using ANFC. This paper aims to demonstrate the recognition performance of ANFC-FH and the improved results obtained with feature selection.
Linguistic Hedges for Ant-Generated Rules FRANTIC, a system inspired by insect behaviour for inducing fuzzy IF-THEN rules, is enhanced to produce rules with linguistic hedges. FRANTIC is evaluated against an earlier version of itself and against several other fuzzy rule induction algorithms, and the results are highly encouraging. Rule comprehensibility is maintained while an improvement in the accuracy of the rulebases induced is observed. Equally important, the increase in computation expense due to the improved richness in the hypothesis language is acknowledged, and several ways of resolving this are discussed.
Toward extended fuzzy logic—A first step Fuzzy logic adds to bivalent logic an important capability—a capability to reason precisely with imperfect information. Imperfect information is information which in one or more respects is imprecise, uncertain, incomplete, unreliable, vague or partially true. In fuzzy logic, results of reasoning are expected to be provably valid, or p-valid for short. Extended fuzzy logic adds an equally important capability—a capability to reason imprecisely with imperfect information. This capability comes into play when precise reasoning is infeasible, excessively costly or unneeded. In extended fuzzy logic, p-validity of results is desirable but not required. What is admissible is a mode of reasoning which is fuzzily valid, or f-valid for short. Actually, much of everyday human reasoning is f-valid reasoning.
Adaptive Rule Weights in Neuro-Fuzzy Systems Neuro-fuzzy systems have recently gained a lot of interest in research and application. They are approaches that use learning techniques derived from neural networks to learn fuzzy systems from data. A very simple ad hoc approach to apply a learning algorithm to a fuzzy system is to use adaptive rule weights. In this paper, we argue that rule weights have a negative effect on the linguistic interpretation of a fuzzy system, and thus remove one of the key advantages for applying fuzzy systems. We show how rule weights can be equivalently replaced by modifying the fuzzy sets of a fuzzy system. If this is done, the actual effects that rule weights have on a fuzzy rule base become visible. We demonstrate at a simple example the problems of using rule weights. We suggest that neuro-fuzzy learning should be better implemented by algorithms that modify the fuzzy sets directly without using rule weights.
Fuzzy control on the basis of equality relations with an example from idle speed control The way engineers use fuzzy control in real world applications is often not coherent with an understanding of the control rules as logical statements or implications. In most cases fuzzy control can be seen as an interpolation of a partially specified control function in a vague environment, which reflects the indistinguishability of measurements or control values. In this paper the authors show that equality relations turn out to be the natural way to represent such vague environments and they develop suitable interpolation methods to obtain a control function. As a special case of our approach the authors obtain Mamdani's model and can justify the inference mechanism in this model and the use of triangular membership functions not only for the reason of simplified computations, and they can explain why typical fuzzy partitions are preferred. The authors also obtain a criterion for reasonable defuzzification strategies. The fuzzy control methodology introduced in this paper has been applied successfully in a case study of engine idle speed control for the Volkswagen Golf GTI
General formulation of formal grammars By extracting the basic properties common to the formal grammars appeared in existing literatures, we develop a general formulation of formal grammars. We define a pseudo grammar and derive from it the well-known probabilistic, fuzzy grammars and so on. Moreover, several interesting grammars such as ⊔∗ grammars, ⊔ ⊓ grammars, ⊔ ⊓ grammars, composite B-fuzzy grammars, and mixed fuzzy grammars, which have never appeared in any other papers before, are derived.
The variety generated by the truth value algebra of type-2 fuzzy sets This paper addresses some questions about the variety generated by the algebra of truth values of type-2 fuzzy sets. Its principal result is that this variety is generated by a finite algebra, and in particular is locally finite. This provides an algorithm for determining when an equation holds in this variety. It also sheds light on the question of determining an equational axiomatization of this variety, although this problem remains open.
Fuzzy connection admission control for ATM networks based on possibility distribution of cell loss ratio This paper proposes a connection admission control (CAC) method for asynchronous transfer mode (ATM) networks based on the possibility distribution of cell loss ratio (CLR). The possibility distribution is estimated in a fuzzy inference scheme by using observed data of the CLR. This method makes possible secure CAC, thereby guaranteeing the allowed CLR. First, a fuzzy inference method is proposed, based on a weighted average of fuzzy sets, in order to estimate the possibility distribution of the CLR. In contrast to conventional methods, the proposed inference method can avoid estimating excessively large values of the CLR. Second, the learning algorithm is considered for tuning fuzzy rules for inference. In this, energy functions are derived so as to efficiently achieve higher multiplexing gain by applying them to CAC. Because the upper bound of the CLR can easily be obtained from the possibility distribution by using this algorithm, CAC can be performed guaranteeing the allowed CLR. The simulation studies show that the proposed method can well extract the upper bound of the CLR from the observed data. The proposed method also makes possible self-compensation in real time for the case where the estimated CLR is smaller than the observed CLR. It preserves the guarantee of the CLR as much as possible in operation of ATM switches. Third, a CAC method which uses the fuzzy inference mentioned above is proposed. In the area with no observed CLR data, fuzzy rules are automatically generated from the fuzzy rules already tuned by the learning algorithm with the existing observed CLR data. Such areas exist because of the absence of experience in connections. This method can guarantee the allowed CLR in the CAC and attains a high multiplex gain as is possible. The simulation studies show its feasibility. Finally, this paper concludes with some brief discussions
Statistical leakage estimation based on sequential addition of cell leakage currents This paper presents a novel method for full-chip statistical leakage estimation that considers the impact of process variation. The proposed method considers the correlations among leakage currents in a chip and the state dependence of the leakage current of a cell for an accurate analysis. For an efficient addition of the cell leakage currents, we propose the virtual-cell approximation (VCA), which sums cell leakage currents sequentially by approximating their sum as the leakage current of a single virtual cell while preserving the correlations among leakage currents. By the use of the VCA, the proposed method efficiently calculates a full-chip leakage current. Experimental results using ISCAS benchmarks at various process variation levels showed that the proposed method provides an accurate result by demonstrating average leakage mean and standard deviation errors of 3.12% and 2.22%, respectively, when compared with the results of a Monte Carlo (MC) simulation-based leakage estimation. In efficiency, the proposed method also demonstrated to be 5000 times faster than MC simulation-based leakage estimations and 9000 times faster than the Wilkinson's method-based leakage estimation.
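The sequential addition described above ultimately approximates a sum of (log-normally distributed) cell leakage currents by a single log-normal. For intuition, the sketch below shows classic Wilkinson-style moment matching for the simplified case of independent cells: match the mean and variance of the sum and convert back to log-normal parameters. The paper's virtual-cell approximation additionally preserves correlations and state dependence, which this toy sketch ignores.

```python
import math

def lognormal_sum_wilkinson(params):
    """params: list of (mu, sigma) of ln(I_cell); return (mu, sigma) of the sum.

    Independent-cell simplification: match the first two moments of the sum of
    log-normals to a single log-normal distribution.
    """
    mean = sum(math.exp(mu + 0.5 * s * s) for mu, s in params)
    var = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * mu + s * s) for mu, s in params)
    sigma2 = math.log(1.0 + var / mean ** 2)
    mu_sum = math.log(mean) - 0.5 * sigma2
    return mu_sum, math.sqrt(sigma2)

# Made-up per-cell parameters of ln(leakage) for three cells.
cells = [(-2.0, 0.4), (-2.3, 0.5), (-1.8, 0.3)]
mu, sigma = lognormal_sum_wilkinson(cells)
print(round(math.exp(mu + 0.5 * sigma * sigma), 4))   # mean of the summed leakage
```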
A Fuzzy Linguistic Methodology to Deal With Unbalanced Linguistic Term Sets Many real problems dealing with qualitative aspects use linguistic approaches to assess such aspects. In most of these problems, a uniform and symmetrical distribution of the linguistic term sets for linguistic modeling is assumed. However, there exist problems whose assessments need to be represented by means of unbalanced linguistic term sets, i.e., using term sets that are not uniformly and symmetrically distributed. The use of linguistic variables implies processes of computing with words (CW). Different computational approaches can be found in the literature to accomplish those processes. The 2-tuple fuzzy linguistic representation introduces a computational model that allows the possibility of dealing with linguistic terms in a precise way whenever the linguistic term set is uniformly and symmetrically distributed. In this paper, we present a fuzzy linguistic methodology in order to deal with unbalanced linguistic term sets. To do so, we first develop a representation model for unbalanced linguistic information that uses the concept of linguistic hierarchy as representation basis and afterwards an unbalanced linguistic computational model that uses the 2-tuple fuzzy linguistic computational model to accomplish processes of CW with unbalanced term sets in a precise way and without loss of information.
Improvement of Auto-Regressive Integrated Moving Average models using Fuzzy logic and Artificial Neural Networks (ANNs) Time series forecasting is an active research area that has drawn considerable attention for applications in a variety of areas. Auto-Regressive Integrated Moving Average (ARIMA) models are one of the most important time series models used in financial market forecasting over the past three decades. Recent research activities in time series forecasting indicate that two basic limitations detract from their popularity for financial time series forecasting: (a) ARIMA models assume that future values of a time series have a linear relationship with current and past values as well as with white noise, so approximations by ARIMA models may not be adequate for complex nonlinear problems; and (b) ARIMA models require a large amount of historical data in order to produce accurate results. Both theoretical and empirical findings have suggested that integration of different models can be an effective method of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, ARIMA models are integrated with Artificial Neural Networks (ANNs) and Fuzzy logic in order to overcome the linear and data limitations of ARIMA models, thus obtaining more accurate results. Empirical results of financial markets forecasting indicate that the hybrid models exhibit effectively improved forecasting accuracy so that the model proposed can be used as an alternative to financial market forecasting tools.
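One common way to realize the kind of hybrid described above is a two-stage scheme: fit an ARIMA model to the series, train a neural network on lagged ARIMA residuals to capture the remaining nonlinear structure, and add the two forecasts. The sketch below is a generic illustration of that scheme with statsmodels and scikit-learn on a synthetic series; the ARIMA order, lag count and network size are arbitrary assumptions, and the fuzzy-logic component of the paper is not included.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(300)
y = 10 + 0.02 * t + np.sin(t / 6.0) + 0.3 * rng.standard_normal(300)   # synthetic series

# Stage 1: linear part with ARIMA (order chosen arbitrarily for the sketch).
arima_res = ARIMA(y, order=(2, 1, 1)).fit()
residuals = arima_res.resid

# Stage 2: nonlinear part -- an MLP trained on lagged residuals.
lags = 4
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
target = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, target)

# One-step-ahead hybrid forecast = ARIMA forecast + predicted residual correction.
arima_forecast = float(arima_res.forecast(steps=1)[0])
resid_correction = float(mlp.predict(residuals[-lags:].reshape(1, -1))[0])
print(round(arima_forecast + resid_correction, 3))
```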
Fuzzy Power Command Enhancement in Mobile Communications Systems
Split Bregman iterative algorithm for sparse reconstruction of electrical impedance tomography In this paper, we present an evaluation of the use of the split Bregman iterative algorithm for the L1-norm regularized inverse problem of electrical impedance tomography. Simulations are performed to validate that our algorithm is competitive in terms of imaging quality and computational speed in comparison with several state-of-the-art algorithms. Results also indicate that, in contrast to the conventional L2-norm regularization method and the total variation (TV) regularization method, the L1-norm regularization method can sharpen the edges and is more robust against data noise.
Scores: 1.07222, 0.083328, 0.066667, 0.016667, 0.008331, 0.000339, 0.000007, 0, 0, 0, 0, 0, 0, 0
Type-2 fuzzy logic systems We introduce a type-2 fuzzy logic system (FLS), which can handle rule uncertainties. The implementation of this type-2 FLS involves the operations of fuzzification, inference, and output processing. We focus on “output processing,” which consists of type reduction and defuzzification. Type-reduction methods are extended versions of type-1 defuzzification methods. Type reduction captures more information about rule uncertainties than does the defuzzified value (a crisp number), however, it is computationally intensive, except for interval type-2 fuzzy sets for which we provide a simple type-reduction computation procedure. We also apply a type-2 FLS to time-varying channel equalization and demonstrate that it provides better performance than a type-1 FLS and nearest neighbor classifier
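For the interval type-2 case above, type reduction amounts to computing the interval [y_l, y_r] of all weighted averages Σ x_i w_i / Σ w_i when each weight w_i may vary in an interval [w_lower_i, w_upper_i]. The Karnik-Mendel procedure finds the optimal switch point iteratively; the sketch below obtains the same endpoints by directly enumerating all switch points, which is simple and exact for small discretisations (an illustration, not the iterative KM code).

```python
def type_reduce(x, w_lower, w_upper):
    """Return (y_l, y_r) for the interval weighted average, with x sorted ascending."""
    n = len(x)

    def weighted_avg(weights):
        return sum(xi * wi for xi, wi in zip(x, weights)) / sum(weights)

    # y_r is maximised by using lower weights for small x and upper weights for
    # large x, switching at some index k; y_l is the mirror image.
    y_r = max(
        weighted_avg([w_lower[i] if i <= k else w_upper[i] for i in range(n)])
        for k in range(-1, n)
    )
    y_l = min(
        weighted_avg([w_upper[i] if i <= k else w_lower[i] for i in range(n)])
        for k in range(-1, n)
    )
    return y_l, y_r

x = [1.0, 2.0, 3.0, 4.0]                 # sorted firing-level domain points
w_lower = [0.2, 0.5, 0.4, 0.1]           # lower membership grades
w_upper = [0.6, 0.9, 0.8, 0.5]           # upper membership grades
y_l, y_r = type_reduce(x, w_lower, w_upper)
print(round(y_l, 4), round(y_r, 4))      # the type-reduced interval
```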
An Interval Type-2 Fuzzy Logic System To Translate Between Emotion-Related Vocabularies This paper describes a novel experiment that demonstrates the feasibility of a fuzzy logic (FL) representation of emotion-related words used to translate between different emotional vocabularies. Type-2 fuzzy sets were encoded using input from web-based surveys that prompted users with emotional words and asked them to enter an interval using a double slider. The similarity of the encoded fuzzy sets was computed and it was shown that a reliable mapping can be made between a large vocabulary of emotional words and a smaller vocabulary of words naming seven emotion categories. Though the mapping results are comparable to Euclidean distance in the valence/activation/dominance space, the FL representation has several benefits that are discussed.
Generalized Extended t-Norms as t-Norms of Type 2 This research work focuses on the logical connectives for type 2 fuzzy logics. Especially, the operators which are obtained by extending continuous t-(co)norms to the case of fuzzy truth values by mean of the generalized extension principle are considered. The authors show that these operators named generalized extended t-(co)norms satisfy the definitions of t-(co)norms of type 2.
Control of a nonlinear continuous bioreactor with bifurcation by a type-2 fuzzy logic controller The object of this paper is the application of a type-2 fuzzy logic controller to a nonlinear system that presents bifurcations. A bifurcation can cause instability in the system or can create new working conditions which, although stable, are unacceptable. The only practical solution for efficient control is the use of high-performance controllers that take into account the uncertainties of the process. A type-2 fuzzy logic controller is tested by simulation on a nonlinear bioreactor system that is characterized by a transcritical bifurcation. Simulation results show the validity of the proposed controllers in preventing the system from reaching bifurcation and unstable or undesirable stable conditions.
Color image segmentation based on type-2 fuzzy sets and region merging This paper focuses on application of fuzzy sets of type 2 (FS2) in color images segmentation. The proposed approach is based on FS2 entropy application and region merging. Both local and global information of the image are employed and FS2 makes it possible to take into account the total uncertainty inherent to the segmentation operation. Fuzzy entropy is utilized as a tool to perform histogram analysis to find all major homogeneous regions at the first stage. Then a basic and fast region merging process, based on color similarity and reduction of small clusters, is carried out to avoid oversegmentation. The experimental results demonstrate that this method is suitable to find homogeneous regions for natural images, even for noisy images.
Robust control of an LUSM-Based X-Y-θ motion control stage using an adaptive interval type-2 fuzzy neural network The robust control of a linear ultrasonic motor based X-Y-θ motion control stage to track various contours is achieved by using an adaptive interval type-2 fuzzy neural network (AIT2FNN) control system in this study. In the proposed AIT2FNN control system, an IT2FNN, which combines the merits of an interval type-2 fuzzy logic system and a neural network, is developed to approximate an unknown dynamic function. Moreover, adaptive learning algorithms are derived using the Lyapunov stability theorem to train the parameters of the IT2FNN online. Furthermore, a robust compensator is proposed to confront the uncertainties including the approximation error, optimal parameter vectors, and higher order terms in the Taylor series. To relax the requirement for the value of the lumped uncertainty in the robust compensator, an adaptive lumped uncertainty estimation law is also investigated. In addition, the circle and butterfly contours are planned using a nonuniform rational B-spline curve interpolator. The experimental results show that the contour tracking performance of the proposed AIT2FNN is significantly improved compared with the adaptive type-1 FNN. Additionally, robustness to parameter variations, external disturbances, cross-coupled interference, and frictional force can also be obtained using the proposed AIT2FNN.
Discrete Interval Type 2 Fuzzy System Models Using Uncertainty in Learning Parameters Fuzzy system modeling (FSM) is one of the most prominent tools that can be used to identify the behavior of highly nonlinear systems with uncertainty. Conventional FSM techniques utilize type 1 fuzzy sets in order to capture the uncertainty in the system. However, since type 1 fuzzy sets express the belongingness of a crisp value x' of a base variable x in a fuzzy set A by a crisp membership value μA(x'), they cannot fully capture the uncertainties due to imprecision in identifying membership functions. Higher types of fuzzy sets can be a remedy to address this issue. Since the computational complexity of operations on fuzzy sets increases with the type of the fuzzy set, the use of type 2 fuzzy sets and linguistic logical connectives has drawn a considerable amount of attention in the realm of fuzzy system modeling in the last two decades. In this paper, we propose a black-box methodology that can identify robust type 2 Takagi-Sugeno, Mizumoto and Linguistic fuzzy system models with high predictive power. One of the essential problems of type 2 fuzzy system models is computational complexity. In order to remedy this problem, discrete interval-valued type 2 fuzzy system models are proposed with type reduction. In the proposed fuzzy system modeling methods, the fuzzy C-means (FCM) clustering algorithm is used to identify the system structure. The proposed discrete interval-valued type 2 fuzzy system models are generated by a learning parameter of FCM, known as the level of membership, and its variation over a specific set of values, which generates the uncertainty associated with the system structure.
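As the abstract above notes, structure identification in these models is driven by fuzzy C-means clustering. The sketch below is a minimal, generic FCM iteration in Python (numpy), not the authors' implementation; the fuzzifier m, the cluster count c, and the stopping rule are illustrative assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # memberships of each sample sum to 1
    for _ in range(n_iter):
        Um = U ** m                                # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / d ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)  # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Toy usage: two noisy clusters in 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
centers, U = fcm(X, c=2)
print(np.round(centers, 2))
```

The membership matrix U returned here carries the kind of "level of membership" information around which a discrete interval-valued type-2 model could be built.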
Type 2 Fuzzy Neural Structure for Identification and Control of Time-Varying Plants In industry, most dynamical plants are characterized by unpredictable and hard-to-formulate factors, uncertainty, and fuzziness of information, and as a result, deterministic models usually prove to be insufficient to adequately describe the process. In such situations, the use of fuzzy approaches becomes a viable alternative. However, the systems constructed on the base of type 1 fuzzy systems cannot directly handle the uncertainties associated with information or data in the knowledge base of the process. One possible way to alleviate the problem is to resort to the use of type 2 fuzzy systems. In this paper, the structure of a type 2 Takagi–Sugeno–Kang fuzzy neural system is presented, and its parameter update rule is derived based on fuzzy clustering and gradient learning algorithm. Its performance for identification and control of time-varying as well as some time-invariant plants is evaluated and compared with other approaches seen in the literature. It is seen that the proposed structure is a potential candidate for identification and control purposes of uncertain plants, with the uncertainties being handled adequately by type 2 fuzzy sets.
An integrated quantitative and qualitative FMCDM model for location choices International logistics is a very popular and important issue in the present international supply chain system. In order to reduce the international supply chain operation cost, it is very important for enterprises to invest in the international logistics centers. Although a number of research approaches for solving decision-making problems have been proposed, most of these approaches focused on developing quantitative models for dealing with objective data or qualitative models for dealing with subjective ratings. Few researchers proposed approaches for dealing with both objective data and subjective ratings. Thus, this paper attempts to fill this gap in current literature by establishing an integrated quantitative and qualitative fuzzy multiple criteria decision-making model for dealing with both objective crisp data and subjective fuzzy ratings. Finally, the utilization of the proposed model is demonstrated with a case study on location choices of international distribution centers.
Extension principles for interval-valued intuitionistic fuzzy sets and algebraic operations The Atanassov's intuitionistic fuzzy (IF) set theory has become a popular topic of investigation in the fuzzy set community. However, there is less investigation on the representation of level sets and extension principles for interval-valued intuitionistic fuzzy (IVIF) sets as well as algebraic operations. In this paper, firstly the representation theorem of IVIF sets is proposed by using the concept of level sets. Then, the extension principles of IVIF sets are developed based on the representation theorem. Finally, the addition, subtraction, multiplication and division operations over IVIF sets are defined based on the extension principle. The representation theorem and extension principles as well as algebraic operations form an important part of Atanassov's IF set theory.
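For a concrete feel of algebraic operations on interval-valued intuitionistic fuzzy values, the sketch below implements a commonly used pair of operations (probabilistic sum on membership bounds for addition, product for multiplication); it is an illustration only and not necessarily the exact operations derived from the extension principle in the paper.

```python
from dataclasses import dataclass

@dataclass
class IVIFV:
    """Interval-valued intuitionistic fuzzy value: a membership interval and a non-membership interval."""
    mu: tuple   # (mu_lower, mu_upper)
    nu: tuple   # (nu_lower, nu_upper), with mu_upper + nu_upper <= 1

def ivif_add(a, b):
    # probabilistic sum on the membership bounds, product on the non-membership bounds
    return IVIFV((a.mu[0] + b.mu[0] - a.mu[0] * b.mu[0],
                  a.mu[1] + b.mu[1] - a.mu[1] * b.mu[1]),
                 (a.nu[0] * b.nu[0], a.nu[1] * b.nu[1]))

def ivif_mul(a, b):
    # product on the membership bounds, probabilistic sum on the non-membership bounds
    return IVIFV((a.mu[0] * b.mu[0], a.mu[1] * b.mu[1]),
                 (a.nu[0] + b.nu[0] - a.nu[0] * b.nu[0],
                  a.nu[1] + b.nu[1] - a.nu[1] * b.nu[1]))

a = IVIFV((0.4, 0.5), (0.2, 0.3))
b = IVIFV((0.3, 0.4), (0.4, 0.5))
print(ivif_add(a, b))   # IVIFV(mu=(0.58, 0.7), nu=(0.08, 0.15)), up to floating-point rounding
```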
Some information measures for interval-valued intuitionistic fuzzy sets A new information entropy measure of interval-valued intuitionistic fuzzy set (IvIFS) is proposed by using membership interval and non-membership interval of IvIFS, which complies with the extended form of Deluca-Termini axioms for fuzzy entropy. Then the cross-entropy of IvIFSs is presented and the relationship between the proposed entropy measures and the existing information measures of IvIFSs is discussed. Additionally, some numerical examples are given to illustrate the applications of the proposed entropy and cross-entropy of IvIFSs to pattern recognition and decision-making.
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
On upper bounds for code distance and covering radius of designs in polynomial metric spaces The purpose of this paper is to present new upper bounds for code distance and covering radius of designs in arbitrary polynomial metric spaces. These bounds and the necessary and sufficient conditions of their attainability were obtained as the solution of an extremal problem for systems of orthogonal polynomials. For antipodal spaces the behaviour of the bounds in different asymptotical processes is determined and it is proved that this bound is attained for all tight 2k-designs.
A New Simulation Technique for Periodic Small-Signal Analysis A new numerical technique for periodic small signal analysis based on harmonic balance method is proposed. Special-purpose numerical procedures based on Krylov subspace methods are developed that reduce the computational efforts of solving linear problems under frequency sweeping. Examples are given to show the efficiency of the new algorithm for computing small signal characteristics for typical RF circuits.
1.002149
0.002607
0.002373
0.002175
0.001995
0.001762
0.001337
0.0009
0.000235
0.000057
0.000002
0
0
0
Uncertainty relations for shift-invariant analog signals The past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited lowpass train that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.
Uncertainty Relations for Analog Signals In the past several years there has been a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely-generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited low-pass comb that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.
Efficient sampling of sparse wideband analog signals Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.
Low Rate Sampling Schemes for Time Delay Estimation Time delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Various approaches have been proposed in the literature to identify time delays introduced by multipath environments. However, these methods either operate on the analog received signal, or require high sampling rates in order to achieve reasonable time resolution. In this paper, our goal is to develop a unified approach to time delay estimation from low rate samples of the output of a multipath channel. Our methods result in perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses. This rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. In addition, our development allows for a variety of different sampling methods. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation methods, we develop necessary and sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.
Sampling theorems for signals from the union of finite-dimensional linear subspaces Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. In this paper, we consider a more general signal model and assume signals that live on or close to the union of linear subspaces of low dimension. We present sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem in that they connect the number of required samples to certain model parameters. Contrary to the Nyquist-Shannon sampling theorem, which gives a necessary and sufficient condition for the number of required samples as well as a simple linear algorithm for signal reconstruction, the model studied here is more complex. We therefore concentrate on two aspects of the signal model, the existence of one to one maps to lower dimensional observation spaces and the smoothness of the inverse map. We show that almost all linear maps are one to one when the observation space is at least of the same dimension as the largest dimension of the convex hull of the union of any two subspaces in the model. However, we also show that in order for the inverse map to have certain smoothness properties such as a given finite Lipschitz constant, the required observation dimension necessarily depends logarithmically on the number of subspaces in the signal model. In other words, while unique linear sampling schemes require a small number of samples depending only on the dimension of the subspaces involved, in order to have stable sampling methods, the number of samples depends necessarily logarithmically on the number of subspaces in the model. These results are then applied to two examples, the standard compressed sensing signal model in which the signal has a sparse representation in an orthonormal basis and to a sparse signal model with additional tree structure.
A Theory for Sampling Signals From a Union of Subspaces One of the fundamental assumptions in traditional sampling theorems is that the signals to be sampled come from a single vector space (e.g., bandlimited functions). However, in many cases of practical interest the sampled signals actually live in a union of subspaces. Examples include piecewise polynomials, sparse representations, nonuniform splines, signals with unknown spectral support, overlapping echoes with unknown delay and amplitude, and so on. For these signals, traditional sampling schemes based on the single subspace assumption can be either inapplicable or highly inefficient. In this paper, we study a general sampling framework where sampled signals come from a known union of subspaces and the sampling operator is linear. Geometrically, the sampling operator can be viewed as projecting sampled signals into a lower dimensional space, while still preserving all the information. We derive necessary and sufficient conditions for invertible and stable sampling operators in this framework and show that these conditions are applicable in many cases. Furthermore, we find the minimum sampling requirements for several classes of signals, which indicates the power of the framework. The results in this paper can serve as a guideline for designing new algorithms for various applications in signal processing and inverse problems.
Model-Based Compressive Sensing Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ≪ N elements from an N-dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models (wavelet trees and block sparsity) into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.
A multiscale framework for Compressive Sensing of video Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
Sparse representation for color image restoration. Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
Performance bounds for expander-based compressed sensing in the presence of poisson noise This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. In this paper, we develop a novel sensing paradigm based on expander graphs and propose a MAP algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. The geometry of the expander graphs makes them provably superior to random dense sensing matrices, such as Gaussian or partial Fourier ensembles, for the Poisson noise model. We support our results with experimental demonstrations.
Efficient statistical capacitance variability modeling with orthogonal principle factor analysis Due to the ever-increasing complexity of VLSI designs and IC process technologies, the mismatch between a circuit fabricated on the wafer and the one designed in the layout tool grows ever larger. Therefore, characterizing and modeling process variations of interconnect geometry has become an integral part of analysis and optimization of modern VLSI designs. In this paper, we present a systematic methodology to develop a closed form capacitance model, which accurately captures the nonlinear relationship between parasitic capacitances and dominant global/local process variation parameters. The explicit capacitance representation applies the orthogonal principle factor analysis to greatly reduce the number of random variables associated with modeling conductor surface fluctuations while preserving the dominant sources of variations, and consequently the variational capacitance model can be efficiently utilized by statistical model order reduction and timing analysis tools. Experimental results demonstrate that the proposed method exhibits over 100× speedup compared with Monte Carlo simulation while having the advantage of generating explicit variational parasitic capacitance models of high order accuracy.
An approximate analogical reasoning approach based on similarity measures An approximate analogical reasoning schema (AARS) which exhibits the advantages of fuzzy set theory and analogical reasoning in expert systems development is described. The AARS avoids going through the conceptually complicated compositional rule of inference. It uses a similarity measure of fuzzy sets as well as a threshold to determine whether a rule should be fired and a modification function inferred from a similarity measure to deduce a consequent. Some numerical examples to illustrate the operation of the schema are presented. Finally, the proposed schema is compared with conventional expert systems and existing fuzzy expert systems
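To make the firing mechanism concrete, here is a minimal sketch of similarity-based rule firing in Python; the similarity measure (one minus the mean absolute difference) and the scaling used as the modification function are illustrative assumptions, not the specific functions defined in the paper.

```python
import numpy as np

def similarity(a, b):
    """A simple similarity between two discretized fuzzy sets: 1 minus the mean absolute difference."""
    return 1.0 - float(np.mean(np.abs(a - b)))

def aars_fire(observation, antecedent, consequent, threshold=0.7):
    """Fire a rule only if the observation is similar enough to its antecedent,
    then deduce a consequent modified by the similarity (here: simple scaling)."""
    s = similarity(observation, antecedent)
    if s < threshold:
        return None                                   # rule is not fired
    return np.minimum(1.0, s * consequent)            # one possible modification function

x = np.linspace(0.0, 1.0, 11)
antecedent  = np.exp(-((x - 0.5) ** 2) / 0.02)        # fuzzy set "about 0.5"
consequent  = np.exp(-((x - 0.8) ** 2) / 0.02)        # fuzzy set "about 0.8"
observation = np.exp(-((x - 0.55) ** 2) / 0.02)       # an observation close to the antecedent
print(aars_fire(observation, antecedent, consequent))
```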
A group decision making procedure for fuzzy interactive linear assignment programming Managers in the current business environment face many decision-making problems that affect the viability of their organization. Decision-making tools and evaluation instruments can help the managers to make more accurate decisions. This paper presents a new decision-making tool for cases where decision data are not crisp and the decision maker(s) want(s) to rank the alternatives during an interactive process. In this paper we propose an interactive method which uses qualitative data to calculate weights of criteria and rank the selected alternatives. Because of the existence of linguistic terms in the decision matrix and the weight of each criterion, which can be expressed as trapezoidal fuzzy numbers, an interactive method is proposed for ranking each alternative with the best weight for each criterion. Using this method, decision makers can provide and modify their preference information gradually within the process of decision making so as to make the decision result more reasonable. A hypothetical numerical example illustrates the proposed method.
Analyzing parliamentary elections based on voting advice application data The main goal of this paper is to model the values of Finnish citizens and the members of the parliament. To achieve this goal, two databases are combined: voting advice application data and the results of the parliamentary elections in 2011. First, the data is converted to a high-dimension space. Then, it is projected to two principal components. The projection allows us to visualize the main differences between the parties. The value grids are produced with a kernel density estimation method without explicitly using the questions of the voting advice application. However, we find meaningful interpretations for the axes in the visualizations with the analyzed data. Subsequently, all candidate value grids are weighted by the results of the parliamentary elections. The result can be interpreted as a distribution grid for Finnish voters' values.
1.029419
0.026633
0.023411
0.016577
0.009107
0.007108
0.00127
0.000227
0.000046
0.000001
0
0
0
0
Toward a generalized theory of uncertainty (GTU): an outline It is a deep-seated tradition in science to view uncertainty as a province of probability theory. The generalized theory of uncertainty (GTU) which is outlined in this paper breaks with this tradition and views uncertainty in a much broader perspective.Uncertainty is an attribute of information. A fundamental premise of GTU is that information, whatever its form, may be represented as what is called a generalized constraint. The concept of a generalized constraint is the centerpiece of GTU. In GTU, a probabilistic constraint is viewed as a special-albeit important-instance of a generalized constraint.A generalized constraint is a constraint of the form X isr R, where X is the constrained variable, R is a constraining relation, generally non-bivalent, and r is an indexing variable which identifies the modality of the constraint, that is, its semantics. The principal constraints are: possibilistic (r=blank); probabilistic (r=p); veristic (r=v); usuality (r=u); random set (r=rs); fuzzy graph (r=fg); bimodal (r=bm); and group (r=g). Generalized constraints may be qualified, combined and propagated. The set of all generalized constraints together with rules governing qualification, combination and propagation constitutes the generalized constraint language (GCL).The generalized constraint language plays a key role in GTU by serving as a precisiation language for propositions, commands and questions expressed in a natural language. Thus, in GTU the meaning of a proposition drawn from a natural language is expressed as a generalized constraint. Furthermore, a proposition plays the role of a carrier of information. This is the basis for equating information to a generalized constraint.In GTU, reasoning under uncertainty is treated as propagation of generalized constraints, in the sense that rules of deduction are equated to rules which govern propagation of generalized constraints. A concept which plays a key role in deduction is that of a protoform (abbreviation of prototypical form). Basically, a protoform is an abstracted summary-a summary which serves to identify the deep semantic structure of the object to which it applies. A deduction rule has two parts: symbolic-expressed in terms of protoforms-and computational.GTU represents a significant change both in perspective and direction in dealing with uncertainty and information. The concepts and techniques introduced in this paper are illustrated by a number of examples.
Construction of interval-valued fuzzy entropy invariant by translations and scalings In this paper, we propose a method to construct interval-valued fuzzy entropies (Burillo and Bustince 1996). This method uses special aggregation functions applied to interval-contrasts. In this way, we are able to construct interval-valued fuzzy entropies from automorphisms and implication operators. Finally, we study the invariance of our constructions by scaling and translation.
Numerical solutions of fuzzy differential and integral equations Using the embedding method, numerical procedures for solving fuzzy differential equations (FDEs) and fuzzy integral equations (FIEs) with arbitrary kernels have been investigated. Sufficient conditions for convergence of the proposed algorithms are given and their applicability is illustrated with examples. This work and its conclusions may narrow the gap between the theoretical research on FDEs and FIEs and the practical applications already existing in the design of various fuzzy dynamical systems.
Evidential reasoning approach for multiattribute decision analysis under both fuzzy and interval uncertainty Many multiple attribute decision analysis (MADA) problems are characterized by both quantitative and qualitative attributes with various types of uncertainties. Incompleteness (or ignorance) and vagueness (or fuzziness) are among the most common uncertainties in decision analysis. The evidential reasoning (ER) and the interval grade ER (IER) approaches have been developed in recent years to support the solution of MADA problems with interval uncertainties and local ignorance in decision analysis. In this paper, the ER approach is enhanced to deal with both interval uncertainty and fuzzy beliefs in assessing alternatives on an attribute. In this newly developed fuzzy IER (FIER) approach, local ignorance and grade fuzziness are modeled under the integrated framework of a distributed fuzzy belief structure, leading to a fuzzy belief decision matrix. A numerical example is provided to illustrate the detailed implementation process of the FIER approach and its validity and applicability.
Sensed Signal Strength Forecasting for Wireless Sensors Using Interval Type-2 Fuzzy Logic System. In this paper, we present a new approach for sensed signal strength forecasting in wireless sensors using an interval type-2 fuzzy logic system (FLS). We show that a type-2 fuzzy membership function, i.e., a Gaussian MF with uncertain mean, is most appropriate to model the sensed signal strength of wireless sensors. We demonstrate that the sensed signals of wireless sensors are self-similar, which means they can be forecasted. An interval type-2 FLS is designed for sensed signal forecasting and is compared against a type-1 FLS. Simulation results show that the interval type-2 FLS performs much better than the type-1 FLS in sensed signal forecasting. This application can be further used for power on/off control in wireless sensors to save battery energy.
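The membership model mentioned here, a Gaussian primary MF whose mean is only known to lie in an interval [m1, m2], has a commonly used footprint-of-uncertainty construction. The sketch below computes its upper and lower membership functions in Python; the parameter values are purely illustrative.

```python
import numpy as np

def it2_gaussian_uncertain_mean(x, m1, m2, sigma):
    """Upper and lower membership functions of a Gaussian MF whose mean lies in [m1, m2]."""
    g = lambda x, m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: left Gaussian branch, flat top of 1 over [m1, m2], right Gaussian branch.
    upper = np.where(x < m1, g(x, m1), np.where(x > m2, g(x, m2), 1.0))
    # Lower MF: the smaller of the two shifted Gaussians, split at the midpoint.
    lower = np.where(x <= 0.5 * (m1 + m2), g(x, m2), g(x, m1))
    return lower, upper

x = np.linspace(-4.0, 4.0, 9)
lower, upper = it2_gaussian_uncertain_mean(x, m1=-0.5, m2=0.5, sigma=1.0)
print(np.round(lower, 3))
print(np.round(upper, 3))
```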
Extension Principle of Interval-Valued Fuzzy Set In this paper, we introduce maximal and minimal extension principles of interval-valued fuzzy sets and an axiomatic definition of the generalized extension principle of interval-valued fuzzy sets, and use the concepts of cut sets of interval-valued fuzzy sets and interval-valued nested sets to explain their construction procedure in detail. These conclusions can be applied in fields such as fuzzy algebra, fuzzy analysis, and so on.
A comprehensive theory of trichotomous evaluative linguistic expressions In this paper, a logical theory of the so-called trichotomous evaluative linguistic expressions (TEv-expressions) is presented. These are frequent expressions of natural language, such as "small, very small, roughly medium, extremely big", etc. The theory is developed using the formal system of higher-order fuzzy logic, namely the fuzzy type theory (a generalization of classical type theory). First, we discuss informally the properties of the meaning of TEv-expressions. Then we construct step by step the axioms of a formal logical theory T^E^v of TEv-expressions and prove various properties of T^E^v. All the proofs are syntactical and so our theory is very general. We also outline the construction of a canonical model of T^E^v. The main elegance of our theory consists in the fact that the semantics of all kinds of evaluative expressions is modeled in a unified way. We also prove theorems demonstrating that essential properties of the vagueness phenomenon can be captured within our theory.
On the position of intuitionistic fuzzy set theory in the framework of theories modelling imprecision Intuitionistic fuzzy sets [K.T. Atanassov, Intuitionistic fuzzy sets, VII ITKR's Session, Sofia (deposed in Central Science-Technical Library of Bulgarian Academy of Science, 1697/84), 1983 (in Bulgarian)] are an extension of fuzzy set theory in which not only a membership degree is given, but also a non-membership degree, which is more or less independent. Considering the increasing interest in intuitionistic fuzzy sets, it is useful to determine the position of intuitionistic fuzzy set theory in the framework of the different theories modelling imprecision. In this paper we discuss the mathematical relationship between intuitionistic fuzzy sets and other models of imprecision.
Optimistic and pessimistic decision making with dissonance reduction using interval-valued fuzzy sets Interval-valued fuzzy sets have been developed and applied to multiple criteria analysis. However, the influence of optimism and pessimism on subjective judgments and the cognitive dissonance that accompanies the decision making process have not been studied thoroughly. This paper presents a new method to reduce cognitive dissonance and to relate optimism and pessimism in multiple criteria decision analysis in an interval-valued fuzzy decision environment. We utilized optimistic and pessimistic point operators to measure the effects of optimism and pessimism, respectively, and further determined a suitability function through weighted score functions. Considering the two objectives of maximal suitability and dissonance reduction, several optimization models were constructed to obtain the optimal weights for the criteria and to determine the corresponding degree of suitability for alternative rankings. Finally, an empirical study was conducted to validate the feasibility and applicability of the current method. We anticipate that the proposed method can provide insight on the influences of optimism, pessimism, and cognitive dissonance in decision analysis studies.
Intuitionistic fuzzy sets: past, present and future Remarks on history, theory, and applications of intuitionistic fuzzy sets are given. Some open problems are introduced.
Adaptive Backstepping Fuzzy Control Based on Type-2 Fuzzy System. A novel indirect adaptive backstepping control approach based on a type-2 fuzzy system is developed for a class of nonlinear systems. This approach adopts a type-2 fuzzy system instead of a type-1 fuzzy system to approximate the unknown functions. With type-reduction, the type-2 fuzzy system is replaced by the average of two type-1 fuzzy systems. Ultimately, the adaptive laws are developed by means of the backstepping design technique to adjust the parameters so as to attenuate the approximation error and external disturbance. According to the stability theorem, it is proved that the proposed Type-2 Adaptive Backstepping Fuzzy Control (T2ABFC) approach can guarantee global stability of the closed-loop system and ensure that all the signals are bounded. Compared with existing Type-1 Adaptive Backstepping Fuzzy Control (T1ABFC), T2ABFC, with its advantages in handling numerical and linguistic uncertainties, has the potential to produce better performance in many respects, such as stability and resistance to disturbances. Finally, a biological simulation example is provided to illustrate the feasibility of the control scheme proposed in this paper.
Graphoid properties of qualitative possibilistic independence relations Independence relations play an important role in uncertain reasoning based on Bayesian networks. In particular, they are useful in decomposing joint distributions into more elementary local ones. Recently, in a possibility theory framework, several qualitative independence relations have been proposed, where uncertainty is encoded by means of a complete pre-order between states of the world. This paper studies the well-known graphoid properties of these qualitative independences. Contrary to the probabilistic independence, several qualitative independence relations are not necessarily symmetric. Therefore, we also analyze the symmetric counterparts of graphoid properties (called reverse graphoid properties).
Postsilicon Tuning of Standby Supply Voltage in SRAMs to Reduce Yield Losses Due to Parametric Data-Retention Failures Lowering the supply voltage of static random access memories (SRAMs) during standby modes is an effective technique to reduce their leakage power consumption. To maximize leakage reductions, it is desirable to reduce the supply voltage as much as possible. SRAM cells can retain their data down to a certain voltage, called the data-retention voltage (DRV). Due to intra-die variations in process parameters, the DRVs of cells differ within a single memory die. Hence, the minimum applicable standby voltage for a memory die (V_DDLmin) is determined by the maximum DRV among its constituent cells. On the other hand, inter-die variations result in a die-to-die variation of V_DDLmin. Applying an identical standby voltage to all dies, regardless of their corresponding V_DDLmin, can result in the failure of some dies, due to data-retention failures (DRFs), entailing yield losses. In this work, we first show that the yield losses can be significant if the standby voltage of SRAMs is reduced aggressively. Then, we propose a postsilicon standby voltage tuning scheme to avoid the yield losses due to DRFs, while reducing the leakage currents effectively. Simulation results in a 45-nm predictive technology show that tuning the standby voltage of SRAMs can enhance data-retention yield by 10%–50%.
Generating realistic stimuli for accurate power grid analysis Power analysis tools are an integral component of any current power sign-off methodology. The performance of a design's power grid affects the timing and functionality of a circuit, directly impacting the overall performance. Ensuring power grid robustness implies taking into account, among others, static and dynamic effects of voltage drop, ground bounce, and electromigration. This type of verification is usually done by simulation, targeting a worst-case scenario where devices, switching almost simultaneously, could impose stern current demands on the power grid. While determination of the exact worst-case switching conditions from the grid perspective is usually not practical, the choice of simulation stimuli has a critical effect on the results of the analysis. Targeting safe but unrealistic settings could lead to pessimistic results and costly overdesigns in terms of die area. In this article we describe a software tool that generates a reasonable, realistic set of stimuli for simulation. The approach proposed accounts for timing and spatial restrictions that arise from the circuit's netlist and placement and generates an approximation to the worst-case condition. The resulting stimuli indicate that only a fraction of the gates change in any given timing window, leading to a more robust verification methodology, especially in the dynamic case. Generating such stimuli is akin to performing a standard static timing analysis, so the tool fits well within conventional design frameworks. Furthermore, the tool can be used for hotspot detection in early design stages.
1.003193
0.00516
0.004938
0.004938
0.003002
0.00264
0.001702
0.000777
0.000272
0.00005
0.000008
0
0
0
High-Dimensional Centrally Symmetric Polytopes with Neighborliness Proportional to Dimension Let A be a d by n matrix, d < n. Let C be the regular cross polytope (octahedron) in Rn. It has recently been shown that properties of the centrosymmetric polytope P = AC are of interest for finding sparse solutions to the underdetermined system of equations y = Ax [9]. In particular, it is valuable to know that P is centrally k-neighborly. We study the face numbers of randomly-projected cross-polytopes in the proportional-dimensional case where d ~ δn, where the projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of Rn. We derive ρN(δ) > 0 with the property that, for any ρ < ρN(δ), with overwhelming probability for large d, the number of k-dimensional faces of P = AC is the same as for C, for 0 ≤ k ≤ ρd. This implies that P is centrally ⌊ρd⌋-neighborly, and its skeleton Skel⌊ρd⌋(P) is combinatorially equivalent to Skel⌊ρd⌋(C). We display graphs of ρN. Two weaker notions of neighborliness are also important for understanding sparse solutions of linear equations: facial neighborliness and sectional neighborliness [9]; we study both. The weakest, (k, ε)-facial neighborliness, asks if the k-faces are all simplicial and if the numbers of k-dimensional faces fk(P) ≥ fk(C)(1 − ε). We characterize and compute the critical proportion ρF(δ) > 0 at which phase transition occurs in k/d. The other, (k, ε)-sectional neighborliness, asks whether all, except for a small fraction ε, of the k-dimensional intrinsic sections of P are k-dimensional cross-polytopes. (Intrinsic sections intersect P with k-dimensional subspaces spanned by vertices of P.) We characterize and compute a proportion ρS(δ) > 0 guaranteeing this property for k/d < ρS(δ). We display graphs of ρS and ρF.
Exact and Approximate Sparse Solutions of Underdetermined Linear Equations In this paper, we empirically investigate the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has recently received considerable interest in the sparse approximation and signal processing literature. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss six (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the so-called basis pursuit heuristic performs worse, compared to the other methods. Among the best heuristics are a method due to Mangasarian and one due to Chinneck.
A Novel Strategy for Radar Imaging Based on Compressive Sensing Radar data have already proven to be compressible with no significant losses for most of the applications in which it is used. In the framework of information theory, the compressibility of a signal implies that it can be decomposed onto a reduced set of basic elements. Since the same quantity of information is carried by the original signal and its decomposition, it can be deduced that a certain ...
Sparse representation and position prior based face hallucination upon classified over-complete dictionaries In compressed sensing theory, decomposing a signal based upon redundant dictionaries is of considerable interest for data representation in signal processing. The signal is approximated by an over-complete dictionary instead of an orthonormal basis for adaptive sparse image decompositions. Existing sparsity-based super-resolution methods commonly train all atoms to construct only a single dictionary for super-resolution. However, this approach results in low precision of reconstruction. Furthermore, the process of generating such dictionary usually involves a huge computational cost. This paper proposes a sparse representation and position prior based face hallucination method for single face image super-resolution. The high- and low-resolution atoms for the first time are classified to form local dictionaries according to the different regions of human face, instead of generating a single global dictionary. Different local dictionaries are used to hallucinate the corresponding regions of face. The patches of the low-resolution face inputs are approximated respectively by a sparse linear combination of the atoms in the corresponding over-complete dictionaries. The sparse coefficients are then obtained to generate high-resolution data under the constraint of the position prior of face. Experimental results illustrate that the proposed method can hallucinate face images of higher quality with a lower computational cost compared to other existing methods.
Highly robust error correction by convex programming This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ Rn (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g. quantization errors). We show that if one encodes the information as Ax where A ∈ Rm×n (m > n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occur upon transmission (or equivalently as if one has an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
Measurement Matrix Design for Compressive Sensing–Based MIMO Radar In colocated multiple-input multiple-output (MIMO) radar using compressive sensing (CS), a receive node compresses its received signal via a linear transformation, referred to as a measurement matrix. The samples are subsequently forwarded to a fusion center, where an l1-optimization problem is formulated and solved for target information. CS-based MIMO radar exploits target sparsity in the angle-Doppler-range space and thus achieves the high localization performance of traditional MIMO radar but with significantly fewer measurements. The measurement matrix affects the recovery performance. A random Gaussian measurement matrix, typically used in CS problems, does not necessarily result in the best possible detection performance for the basis matrix corresponding to the MIMO radar scenario. This paper considers optimal measurement matrix design with the optimality criterion depending on the coherence of the sensing matrix (CSM) and/or signal-to-interference ratio (SIR). Two approaches are proposed: the first one minimizes a linear combination of CSM and the inverse SIR, and the second one imposes a structure on the measurement matrix and determines the parameters involved so that the SIR is enhanced. Depending on the transmit waveforms, the second approach can significantly improve the SIR, while maintaining a CSM comparable to that of the Gaussian random measurement matrix (GRMM). Simulations indicate that the proposed measurement matrices can improve detection accuracy as compared to a GRMM.
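Since the design criterion here depends on the coherence of the sensing matrix, a small helper for computing mutual coherence may be a useful reference point. The sketch below is a generic Python implementation; the random Gaussian matrix is only a stand-in for whatever product of measurement matrix and basis matrix one actually wants to evaluate.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns of A."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(An.T @ An)                               # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                            # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 128))   # illustrative stand-in for the effective sensing matrix
print(mutual_coherence(Phi))
```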
Circulant and Toeplitz matrices in compressed sensing Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements, we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by ℓ1-minimization. In contrast to recent work in this direction, we allow the use of an arbitrary subset of rows of a circulant and Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by ℓ1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality.
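As a quick illustration of taking compressive measurements with a partial random circulant matrix (an arbitrary subset of rows of a circulant matrix built from a random generator vector), here is a short Python sketch; the dimensions and the Rademacher generator are illustrative choices, and the ℓ1-minimization recovery step discussed in the abstract is not shown.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8                              # signal length, number of measurements, sparsity
g = rng.choice([-1.0, 1.0], size=n)               # random (Rademacher) generator vector
C = circulant(g)                                  # full n x n circulant matrix
rows = rng.choice(n, size=m, replace=False)       # an arbitrary subset of m rows
Phi = C[rows, :]                                  # partial random circulant measurement matrix

x = np.zeros(n)                                   # a k-sparse test signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x                                       # m compressive measurements
# Recovering x from (y, Phi) would proceed by l1-minimization, which is omitted here.
print(Phi.shape, y.shape)
```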
Iterative Hard Thresholding for Compressed Sensing Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper): it gives near-optimal error guarantees; it is robust to observation noise; it succeeds with a minimum number of observations; it can be used with any sampling operator for which the operator and its adjoint can be computed; the memory requirement is linear in the problem size; its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint; it requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal; its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
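The core iteration analyzed in this abstract is simple enough to state in a few lines. The following Python sketch is a bare-bones version, assuming the measurement operator is given as an explicit matrix scaled so that its spectral norm does not exceed one (as the standard analysis requires); step-size selection and stopping criteria are omitted.

```python
import numpy as np

def iht(y, Phi, K, n_iter=200):
    """Basic iterative hard thresholding: x <- H_K(x + Phi^T (y - Phi x)),
    where H_K keeps only the K largest-magnitude entries."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + Phi.T @ (y - Phi @ x)       # gradient step on the least-squares term
        keep = np.argsort(np.abs(x))[-K:]   # indices of the K largest entries
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]              # hard thresholding
        x = pruned
    return x

rng = np.random.default_rng(0)
n, m, K = 256, 100, 5
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, 2)               # crude scaling of the spectral norm to 1
x_true = np.zeros(n)
x_true[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)
x_hat = iht(Phi @ x_true, Phi, K)
print(np.linalg.norm(x_hat - x_true))       # should be small for this well-conditioned toy case
```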
Linear transformations and Restricted Isometry Property The restricted isometry property (RIP) introduced by Candes and Tao is a fundamental property in compressed sensing theory. It says that if a sampling matrix satisfies the RIP of certain order proportional to the sparsity of the signal, then the original signal can be reconstructed even if the sampling matrix provides a sample vector which is much smaller in size than the original signal. This short note addresses the problem of how a linear transformation will affect the RIP. This problem arises from the consideration of extending the sensing matrix and the use of compressed sensing in different bases. As an application, the result is applied to the redundant dictionary setting in compressed sensing.
Analysis and Generalizations of the Linearized Bregman Method This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smooth parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai-Borwein, limited memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov's methods. In the numerical simulations, the two proposed implementations, one using Barzilai-Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a so-called kicking technique.
Parameterized model order reduction via a two-directional Arnoldi process This paper presents a multiparameter moment-matching based model order reduction technique for parameterized interconnect networks via a novel two-directional Arnoldi process. It is referred to as a PIMTAP algorithm, which stands for Parameterized Interconnect Macromodeling algorithm via a Two-directional Arnoldi Process. PIMTAP inherits the advantages of previous multiparameter moment-matching algorithms and avoids their shortfalls. It is numerically stable and adaptive, and preserves the passivity of parameterized RLC networks.
Real-time constrained TCP-compatible rate control for video over the Internet This paper describes a rate control algorithm that captures not only the behavior of TCP's congestion control avoidance mechanism but also the delay constraints of real-time streams. Building upon the TFRC protocol, a new protocol has been designed for estimating the bandwidth prediction model parameters. Making use of RTP and RTCP, this protocol makes it possible to better take into account the characteristics of multimedia flows (variable packet size, delay ...). Given the current channel state estimated by the above protocol, encoder and decoder buffer states as well as delay constraints of the real-time video source are translated into encoder rate constraints. This global rate control model, coupled with an H.263+ loss resilient video compression algorithm, has been extensively experimented with on various Internet links. The experiments clearly demonstrate the benefits of 1/ the new protocol used for estimating the bandwidth prediction model parameters, adapted to multimedia flows characteristics, and of 2/ the global rate control model encompassing source buffers and end-to-end delay characteristics. The overall system significantly reduces source timeouts, and hence minimizes the expected distortion, for a comparable usage of the TCP-compatible predicted bandwidth.
Computation of equilibrium measures. We present a new way of computing equilibrium measures numerically, based on the Riemann–Hilbert formulation. For equilibrium measures whose support is a single interval, the simple algorithm consists of a Newton–Raphson iteration where each step only involves fast cosine transforms. The approach is then generalized for multiple intervals.
Using buffered playtime for QoE-oriented resource management of YouTube video streaming. YouTube is the most important online platform for streaming video clips. The popularity and the continuously increasing number of users pose new challenges for Internet service providers. In particular, in access networks where the transmission resources are limited and the providers are interested in reducing their operational expenditure, it is worth to efficiently optimise the network for popular services such as YouTube. In this paper, we propose different resource management mechanisms to improve the quality of experience (QoE) of YouTube users. In particular, we investigate the benefit of cross-layer resource management actions at the client and in the access network for YouTube video streaming. The proposed algorithms are evaluated in a wireless mesh testbed. The results show how to improve the YouTube QoE for the users with the help of client-based or network-based control actions.
1.007552
0.007207
0.006558
0.006486
0.005405
0.003243
0.00162
0.000935
0.000142
0.00002
0
0
0
0
Tools for fuzzy random variables: Embeddings and measurabilities The concept of a fuzzy random variable has been shown to be a valuable model for handling fuzzy data in statistical problems. The theory of fuzzy-valued random elements provides a suitable formalization for the management of fuzzy data in the probabilistic setting. A concise overview of fuzzy random variables, focussed on the crucial aspects for data analysis, is presented.
A fuzzy-based methodology for the analysis of diabetic neuropathy A new model for the fuzzy-based analysis of diabetic neuropathy is illustrated, whose pathogenesis so far is not well known. The underlying algebraic structure is a commutative l-monoid, whose support is a set of classifications based on the concept of linguistic variable introduced by Zadeh. The analysis is carried out by means of patient's anagraphical and clinical data, e.g. age, sex, duration of the disease, insulinic needs, severity of diabetes, possible presence of complications. The results obtained by us are identical with medical diagnoses. Moreover, analyzing suitable relevance factors one gets reasonable information about the etiology of the disease, our results agree with most credited clinical hypotheses.
Estimating the expected value of fuzzy random variables in the stratified random sampling from finite populations In this paper, we consider the problem of estimating the expected value of a fuzzy-valued random element in the stratified random sampling from finite populations. To this purpose, we quantify the associated sampling error by means of a generalized measure introduced in a previous paper. We also suggest a way to compare different variates for stratification, as well as to test the adequacy of a certain one.
Bootstrap techniques and fuzzy random variables: Synergy in hypothesis testing with fuzzy data In previous studies we have stated that the well-known bootstrap techniques are a valuable tool in testing statistical hypotheses about the means of fuzzy random variables, when these variables are supposed to take on a finite number of different values and these values being fuzzy subsets of the one-dimensional Euclidean space. In this paper we show that the one-sample method of testing about the mean of a fuzzy random variable can be extended to general ones (more precisely, to those whose range is not necessarily finite and whose values are fuzzy subsets of finite-dimensional Euclidean space). This extension is immediately developed by combining some tools in the literature, namely, bootstrap techniques on Banach spaces, a metric between fuzzy sets based on the support function, and an embedding of the space of fuzzy random variables into a Banach space which is based on the support function.
Generalized theory of uncertainty (GTU)-principal concepts and ideas Uncertainty is an attribute of information. The path-breaking work of Shannon has led to a universal acceptance of the thesis that information is statistical in nature. Concomitantly, existing theories of uncertainty are based on probability theory. The generalized theory of uncertainty (GTU) departs from existing theories in essential ways. First, the thesis that information is statistical in nature is replaced by a much more general thesis that information is a generalized constraint, with statistical uncertainty being a special, albeit important case. Equating information to a generalized constraint is the fundamental thesis of GTU. Second, bivalence is abandoned throughout GTU, and the foundation of GTU is shifted from bivalent logic to fuzzy logic. As a consequence, in GTU everything is or is allowed to be a matter of degree or, equivalently, fuzzy. Concomitantly, all variables are, or are allowed to be granular, with a granule being a clump of values drawn together by a generalized constraint. And third, one of the principal objectives of GTU is achievement of NL-capability, that is, the capability to operate on information described in natural language. NL-capability has high importance because much of human knowledge, including knowledge about probabilities, is described in natural language. NL-capability is the focus of attention in the present paper. The centerpiece of GTU is the concept of a generalized constraint. The concept of a generalized constraint is motivated by the fact that most real-world constraints are elastic rather than rigid, and have a complex structure even when simple in appearance. The paper concludes with examples of computation with uncertain information described in natural language.
Joint propagation of probability and possibility in risk analysis: Towards a formal framework This paper discusses some models of Imprecise Probability Theory obtained by propagating uncertainty in risk analysis when some input parameters are stochastic and perfectly observable, while others are either random or deterministic, but the information about them is partial and is represented by possibility distributions. Our knowledge about the probability of events pertaining to the output of some function of interest from the risk analysis model can be either represented by a fuzzy probability or by a probability interval. It is shown that this interval is the average cut of the fuzzy probability of the event, thus legitimating the propagation method. Besides, several independence assumptions underlying the joint probability-possibility propagation methods are discussed and illustrated by a motivating example.
Fuzzy control as a fuzzy deduction system An approach to fuzzy control based on fuzzy logic in narrow sense (fuzzy inference rules + fuzzy set of logical axioms) is proposed. This gives an interesting theoretical framework and suggests new tools for fuzzy control.
A 2-tuple fuzzy linguistic representation model for computing with words The fuzzy linguistic approach has been applied successfully to many problems. However, there is a limitation of this approach imposed by its information representation model and the computation methods used when fusion processes are performed on linguistic values. This limitation is the loss of information; this loss of information implies a lack of precision in the final results from the fusion of linguistic information. In this paper, we present tools for overcoming this limitation. The linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in (-0.5, 0.5). This model allows a continuous representation of the linguistic information on its domain; therefore, it can represent any counting of information obtained in an aggregation process. We then develop a computational technique for computing with words without any loss of information. Finally, different classical aggregation operators are extended to deal with the 2-tuple linguistic model.
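The 2-tuple representation described above lends itself to a very small sketch. The Python snippet below illustrates the translation between a value in [0, g] and a (term, symbolic translation) pair, and uses it to aggregate a few assessments; the term set and the example assessments are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of the 2-tuple linguistic representation described above
# (term set, Delta and Delta^-1 translations). The term set S and the example
# assessments are invented for illustration; they are not data from the paper.
S = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]  # s_0 .. s_6

def delta(beta):
    """Translate beta in [0, len(S) - 1] into a 2-tuple (linguistic term, alpha)."""
    i = int(beta + 0.5)                 # round half up, so alpha stays in [-0.5, 0.5)
    return S[i], round(beta - i, 6)

def delta_inv(term, alpha):
    """Translate a 2-tuple back into its numeric equivalent beta = index + alpha."""
    return S.index(term) + alpha

# Aggregate three linguistic assessments with the 2-tuple arithmetic mean.
assessments = [("low", 0.0), ("medium", 0.25), ("high", -0.4)]
beta_mean = sum(delta_inv(t, a) for t, a in assessments) / len(assessments)
print(delta(beta_mean))                 # -> ('medium', -0.05), no information lost
```

The printed pair keeps the aggregated value exactly, which is the "without any loss of information" property the abstract refers to.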
A Social Choice Analysis Of The Borda Rule In A General Linguistic Framework In this paper the Borda rule is extended by allowing the voters to show their preferences among alternatives through linguistic labels. To this aim, we need to add them up for assigning a qualification to each alternative and then to compare such qualifications. Theoretically, all these assessments and comparisons fall into a totally ordered commutative monoid generated by the initial set of linguistic labels. Practically, we show an example which illustrates the suitability of this linguistic approach. Finally, some interesting properties for this Borda rule are proven in the Social Choice context.
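As a rough illustration of the Borda-style aggregation of linguistic labels described above, the sketch below adds up label positions per alternative; representing each label by its index in a totally ordered list is only a crude stand-in for the monoid construction of the paper, and the voters and grades are invented.

```python
# Illustrative sketch only: labels, voters and grades are invented. Each voter
# grades every alternative with a linguistic label; as a crude stand-in for the
# ordered monoid of labels used in the paper, a Borda-style score is obtained
# by adding up label positions per alternative and comparing the totals.
labels = ["very_bad", "bad", "fair", "good", "very_good"]   # totally ordered
profile = {
    "v1": {"A": "good", "B": "fair", "C": "very_bad"},
    "v2": {"A": "very_good", "B": "bad", "C": "fair"},
    "v3": {"A": "fair", "B": "good", "C": "bad"},
}

scores = {}
for grades in profile.values():
    for alt, lab in grades.items():
        scores[alt] = scores.get(alt, 0) + labels.index(lab)

ranking = sorted(scores, key=scores.get, reverse=True)
print(scores, ranking)                  # A collects the highest aggregate, then B, then C
```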
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t−τ), obeying |T| ≤ C_M · (log N)^(-1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(-M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(-M)) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
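A small numerical sketch of the ℓ1-recovery idea can be written with off-the-shelf tools. The code below recovers a sparse vector from a few random linear measurements by recasting basis pursuit (min ||x||_1 subject to Ax = y) as a linear program; a real Gaussian measurement matrix stands in for partial Fourier sampling purely to keep everything real-valued, so this illustrates the principle rather than reproducing the paper's setting.

```python
# Sketch of l1 recovery from a few linear measurements. A real Gaussian matrix
# stands in for partial Fourier sampling so that the linear program stays
# real-valued; problem sizes and the random seed are arbitrary choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                     # signal length, measurements, spikes
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit (min ||x||_1 s.t. Ax = y) as an LP: x = u - v with u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```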
Evaluating automated and manual acquisition of anaphora resolution strategies We describe one approach to build an automatically trainable anaphora resolution system. In this approach, we use Japanese newspaper articles tagged with discourse information as training examples for a machine learning algorithm which employs the C4.5 decision tree algorithm (Quinlan, 1993). Then, we evaluate and compare the results of several variants of the machine learning-based approach with those of our existing anaphora resolution system which uses manually-designed knowledge sources. Finally, we compare our algorithms with existing theories of anaphora, in particular, Japanese zero pronouns.
On the empirical rate-distortion performance of Compressive Sensing Compressive Sensing (CS) is a new paradigm in signal acquisition and compression that has been attracting the interest of the signal compression community. When it comes to image compression applications, it is relevant to estimate the number of bits required to reach a specific image quality. Although several theoretical results regarding the rate-distortion performance of CS have been published recently, there are not many practical image compression results available. The main goal of this paper is to carry out an empirical analysis of the rate-distortion performance of CS in image compression. We analyze issues such as the minimization algorithm used and the transform employed, as well as the trade-off between number of measurements and quantization error. From the experimental results obtained we highlight the potential and limitations of CS when compared to traditional image compression methods.
Efficient Euclidean projections in linear time We consider the problem of computing the Euclidean projection of a vector of length n onto a closed convex set including the l1 ball and the specialized polyhedra employed in (Shalev-Shwartz & Singer, 2006). These problems have played building block roles in solving several l1-norm based sparse learning problems. Existing methods have a worst-case time complexity of O(n log n). In this paper, we propose to cast both Euclidean projections as root finding problems associated with specific auxiliary functions, which can be solved in linear time via bisection. We further make use of the special structure of the auxiliary functions, and propose an improved bisection algorithm. Empirical studies demonstrate that the proposed algorithms are much more efficient than the competing ones for computing the projections.
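The bisection idea described above is easy to sketch for the ℓ1 ball: find the threshold at which the soft-thresholded vector has the prescribed ℓ1 norm, then apply that threshold. The snippet below is a minimal sketch of this recipe, not the authors' optimized implementation.

```python
# Minimal sketch of projecting w onto the l1 ball {x : ||x||_1 <= z}: find the
# root of g(theta) = sum_i max(|w_i| - theta, 0) - z by bisection, then
# soft-threshold w at that theta. This follows the general recipe in the
# abstract; it is not the authors' optimized implementation.
import numpy as np

def project_l1_ball(w, z, iters=60):
    a = np.abs(w)
    if a.sum() <= z:                    # w is already inside the ball
        return w.copy()
    lo, hi = 0.0, a.max()               # g(lo) >= 0 and g(hi) = -z < 0
    for _ in range(iters):              # plain bisection on theta
        theta = 0.5 * (lo + hi)
        if np.maximum(a - theta, 0.0).sum() > z:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.sign(w) * np.maximum(a - theta, 0.0)

w = np.array([0.8, -1.5, 0.3, 2.0, -0.1])
x = project_l1_ball(w, z=2.0)
print(x, np.abs(x).sum())               # ||x||_1 is (numerically) 2.0
```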
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.075155
0.084394
0.024407
0.019148
0.01056
0.000241
0.000029
0.000003
0
0
0
0
0
0
Robust-SL0 for stable sparse representation in noisy settings In the last few years, we have witnessed an explosion in applications of sparse representation, the majority of which share the need for finding sparse solutions of underdetermined systems of linear equations (USLEs). Based on the recently proposed smoothed ℓ0-norm (SL0), we develop a noise-tolerant algorithm for sparse representation, namely Robust-SL0, enjoying the same computational advantages of SL0 while demonstrating remarkable robustness against noise. The proposed algorithm is developed by adapting the corresponding optimization problem to noisy settings, followed by a theoretically-justified approximation to reduce the complexity. Stability properties of Robust-SL0 are rigorously analyzed, both analytically and experimentally, revealing a remarkable improvement in performance over SL0 and other competing algorithms in the presence of noise.
Sparse representation and learning in visual recognition: Theory and applications Sparse representation and learning has been widely used in computational intelligence, machine learning, computer vision and pattern recognition, etc. Mathematically, solving sparse representation and learning involves seeking the sparsest linear combination of basis functions from an overcomplete dictionary. A rationale behind this is the sparse connectivity between nodes in the human brain. This paper presents a survey of some recent work on sparse representation, learning and modeling with an emphasis on visual recognition. It covers both the theory and application aspects. We first review the sparse representation and learning theory including general sparse representation, structured sparse representation, high-dimensional nonlinear learning, Bayesian compressed sensing, sparse subspace learning, non-negative sparse representation, robust sparse representation, and efficient sparse representation. We then introduce the applications of sparse theory to various visual recognition tasks, including feature representation and selection, dictionary learning, Sparsity Induced Similarity (SIS) measures, sparse coding based classification frameworks, and sparsity-related topics.
Compressive sensing for subsurface imaging using ground penetrating radar The theory of compressive sensing (CS) enables the reconstruction of sparse signals from a small set of non-adaptive linear measurements by solving a convex ℓ1 minimization problem. This paper presents a novel data acquisition system for wideband synthetic aperture imaging based on CS by exploiting sparseness of point-like targets in the image space. Instead of measuring sensor returns by sampling at the Nyquist rate, linear projections of the returned signals with random vectors are used as measurements. Furthermore, random sampling along the synthetic aperture scan points can be incorporated into the data acquisition scheme. The required number of CS measurements can be an order of magnitude less than uniform sampling of the space-time data. For the application of underground imaging with ground penetrating radars (GPR), typical images contain only a few targets. Thus we show, using simulated and experimental GPR data, that sparser target space images are obtained which are also less cluttered when compared to standard imaging results.
A fast approach for overcomplete sparse decomposition based on smoothed l0 norm In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include under-determined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the l1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the l0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
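A compact sketch of the smoothed-ℓ0 scheme summarized above: replace the ℓ0 norm by a sum of Gaussians, take small gradient steps that shrink small coefficients, project back onto the affine set {s : As = x}, and gradually decrease σ. The step size, σ schedule and synthetic test problem below are illustrative choices rather than the authors' reference implementation.

```python
# Compact sketch of the smoothed-l0 scheme: maximize a sum of Gaussians (a
# smooth proxy for counting zeros) by small gradient steps, project back onto
# {s : As = x}, and shrink sigma gradually. Step size, sigma schedule and the
# synthetic test problem are illustrative choices, not the reference code.
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, inner_iters=3):
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                       # minimum-l2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            s = s - mu * s * np.exp(-s**2 / (2.0 * sigma**2))  # shrink small entries
            s = s - A_pinv @ (A @ s - x)                       # project onto As = x
        sigma *= sigma_decay
    return s

rng = np.random.default_rng(1)
n, m, k = 100, 40, 4
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
s_hat = sl0(A, A @ s_true)
print("max error:", np.max(np.abs(s_hat - s_true)))
```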
Compressed sensing of complex-valued data Compressed sensing (CS) is a recently proposed technique that allows the reconstruction of a signal sampled in violation of the traditional Nyquist criterion. It has immediate applications in reduction of acquisition time for measurements, simplification of hardware, reduction of memory space required for data storage, etc. CS has usually been applied by considering real-valued data. However, complex-valued data are very common in practice, such as terahertz (THz) imaging, synthetic aperture radar and sonar, holography, etc. In such cases CS is applied by decoupling real and imaginary parts or using amplitude constraints. Recently, it was shown in the literature that the quality of reconstruction for THz imaging can be improved by applying a smoothness constraint on phase as well as amplitude. In this paper, we propose a general ℓp minimization recovery algorithm for CS, which can deal with complex data and smooth the amplitude and phase of the data at the same time, and which also has the additional feature of using a separate sparsity-promoting basis such as wavelets. Thus, objects can be better detected from limited noisy measurements, which is useful for surveillance systems.
Compressed Sensing Shape Estimation of Star-Shaped Objects in Fourier Imaging Recent theory of compressed sensing informs us that near-exact recovery of an unknown sparse signal is possible from a very limited number of Fourier samples by solving a convex L1 optimization problem. The main contribution of the present letter is a compressed sensing-based novel nonparametric shape estimation framework and a computational algorithm for binary star shape objects, whose radius fu...
Breakdown of equivalence between the minimal l1-norm solution and the sparsest solution Finding the sparsest solution to a set of underdetermined linear equations is NP-hard in general. However, recent research has shown that for certain systems of linear equations, the sparsest solution (i.e. the solution with the smallest number of nonzeros) is also the solution with minimal l1 norm, and so can be found by a computationally tractable method. For a given n by m matrix Φ defining a system y=Φα, with n < m making the system underdetermined, this phenomenon holds whenever there exists a 'sufficiently sparse' solution α0. We quantify the 'sufficient sparsity' condition, defining an equivalence breakdown point (EBP): the degree of sparsity of α required to guarantee equivalence to hold; this threshold depends on the matrix Φ. In this paper we study the size of the EBP for 'typical' matrices with unit norm columns (the uniform spherical ensemble (USE)); Donoho showed that for such matrices Φ, the EBP is at least proportional to n. We distinguish three notions of breakdown point--global, local, and individual--and describe a semi-empirical heuristic for predicting the local EBP at this ensemble. Our heuristic identifies a configuration which can cause breakdown, and predicts the level of sparsity required to avoid that situation. In experiments, our heuristic provides upper and lower bounds bracketing the EBP for 'typical' matrices in the USE. For instance, for an n × m matrix Φn,m with m = 2n, our heuristic predicts breakdown of local equivalence when the coefficient vector α has about 30% nonzeros (relative to the reduced dimension n). This figure reliably describes the observed empirical behavior. A rough approximation to the observed breakdown point is provided by the simple formula 0.44 · n/log(2m/n). There are many matrix ensembles of interest outside the USE; our heuristic may be useful in speeding up empirical studies of breakdown point at such ensembles. Rather than solving numerous linear programming problems per n, m combination, at least several for each degree of sparsity, the heuristic suggests conducting a few experiments to measure the driving term of the heuristic and derive predictive bounds. We tested the applicability of this heuristic to three special ensembles of matrices, including the partial Hadamard ensemble and the partial Fourier ensemble, and found that it accurately predicts the sparsity level at which local equivalence breakdown occurs, which is at a lower level than for the USE. A rough approximation to the prediction is provided by the simple formula 0.65 · n/log(1 + 10m/n).
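The two rough breakdown-point approximations quoted in the abstract are easy to evaluate; the snippet below does so for an illustrative problem size (the values of n and m are examples, not data from the paper). For m = 2n the USE formula gives roughly 32% of n, consistent with the "about 30% nonzeros" figure quoted above.

```python
# The two rough approximations quoted above, evaluated for an example size.
import math

def ebp_use(n, m):        # uniform spherical ensemble: 0.44 * n / log(2m/n)
    return 0.44 * n / math.log(2 * m / n)

def ebp_partial(n, m):    # partial Hadamard / Fourier: 0.65 * n / log(1 + 10m/n)
    return 0.65 * n / math.log(1 + 10 * m / n)

n, m = 200, 400           # m = 2n, the case discussed in the abstract
print(round(ebp_use(n, m), 1), round(ebp_partial(n, m), 1))   # ~63.5 vs ~42.7 nonzeros
```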
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Duality Theory in Fuzzy Linear Programming Problems with Fuzzy Coefficients The concept of fuzzy scalar (inner) product that will be used in the fuzzy objective and inequality constraints of the fuzzy primal and dual linear programming problems with fuzzy coefficients is proposed in this paper. We also introduce a solution concept that is essentially similar to the notion of Pareto optimal solution in the multiobjective programming problems by imposing a partial ordering on the set of all fuzzy numbers. We then prove the weak and strong duality theorems for fuzzy linear programming problems with fuzzy coefficients.
Interval Type-2 Fuzzy Logic Systems Made Simple To date, because of the computational complexity of using a general type-2 fuzzy set (T2 FS) in a T2 fuzzy logic system (FLS), most people only use an interval T2 FS, the result being an interval T2 FLS (IT2 FLS). Unfortunately, there is a heavy educational burden even to using an IT2 FLS. This burden has to do with first having to learn general T2 FS mathematics, and then specializing it to IT2 FSs. In retrospect, we believe that requiring a person to use T2 FS mathematics represents a barrier to the use of an IT2 FLS. In this paper, we demonstrate that it is unnecessary to take the route from general T2 FS to IT2 FS, and that all of the results that are needed to implement an IT2 FLS can be obtained using T1 FS mathematics. As such, this paper is a novel tutorial that makes an IT2 FLS much more accessible to all readers of this journal. We can now develop an IT2 FLS in a much more straightforward way.
Compressive speech enhancement This paper presents an alternative approach to speech enhancement by using compressed sensing (CS). CS is a new sampling theory, which states that sparse signals can be reconstructed from far fewer measurements than Nyquist sampling requires. As such, CS can be exploited to reconstruct only the sparse components (e.g., speech) from the mixture of sparse and non-sparse components (e.g., noise). This is possible because in a time-frequency representation, the speech signal is sparse whilst most noise is non-sparse. Derivation shows that on average the signal-to-noise ratio (SNR) in the compressed domain is greater than or equal to that in the uncompressed domain. Experimental results concur with the derivation, and the proposed CS scheme achieves better or similar perceptual evaluation of speech quality (PESQ) scores and segmental SNR compared to other conventional methods over a wide range of input SNR.
Independent systems of representatives in weighted graphs The following conjecture may have never been explicitly stated, but seems to have been floating around: if the vertex set of a graph with maximal degree Δ is partitioned into sets V i of size 2Δ, then there exists a coloring of the graph by 2Δ colors, where each color class meets each V i at precisely one vertex. We shall name it the strong 2Δ-colorability conjecture. We prove a fractional version of this conjecture. For this purpose, we prove a weighted generalization of a theorem of Haxell, on independent systems of representatives (ISR’s). En route, we give a survey of some recent developments in the theory of ISR’s.
Heden's bound on maximal partial spreads We prove Heden's result that the deficiency δ of a maximal partial spread in PG(3, q) is greater than 1 + ½(1+√5)√q unless δ−1 is a multiple of p, where q = p^n. When q is odd and not a square, we are able to improve this lower bound to roughly √3 q.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.071111
0.033333
0.026667
0.008462
0.004444
0.001111
0.000196
0
0
0
0
0
0
0
A practical approach to nonlinear fuzzy regression This paper presents a new method of mathematical modeling in an uncertain environment. The uncertainties of data and model are treated using concepts of fuzzy set theory. The model fitting principle is the minimization of a least squares objective function. A practical modeling procedure is obtained by restricting the type of data and parameter fuzziness to conical membership functions. Under this restriction, the model fitting problem can be solved numerically with the aid of any least squares software for regression with implicit constraint equations. The paper contains a short discussion of the geometry of fuzzy point and function spaces with conical membership functions, and illustrates the application of fuzzy regression with an example from terminal ballistics.
Fuzzy estimates of regression parameters in linear regression models for imprecise input and output data The method for obtaining the fuzzy estimates of regression parameters with the help of "Resolution Identity" in fuzzy sets theory is proposed. The α-level least-squares estimates can be obtained from the usual linear regression model by using the α-level real-valued data of the corresponding fuzzy input and output data. The membership functions of fuzzy estimates of regression parameters will be constructed according to the form of "Resolution Identity" based on the α-level least-squares estimates. In order to obtain the membership degree of any given value taken from the fuzzy estimate, optimization problems have to be solved. Two computational procedures are also provided to solve the optimization problems.
A generalized fuzzy weighted least-squares regression A fairly general fuzzy regression technique is proposed based on the least-squares approach. The main concept is to estimate the modal value and the spreads separately. In order to do this, the interactions between the modal value and the spreads are first analyzed in detail. The advantages of this new fuzzy weighted least-squares regression (FWLSR) approach are: (1) the estimation of both non-interactive and interactive fuzzy parameters can be performed by the same method, (2) the decision-makers' confidence in the gathered data and in the established model can be incorporated into the process, and (3) suspicious outliers (or fuzzy outliers), that is, data points that are obviously and suspiciously lying outside the usual range, can be treated and their effects can be reduced. A numerical example is provided to show that the proposed method can be an effective computational tool in fuzzy regression analysis.
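A minimal numerical sketch of the "estimate the modal value and the spreads separately" idea described above, for symmetric triangular fuzzy outputs represented as (center, spread) pairs: two ordinary least-squares fits, one for the centers and one for the spreads. The synthetic data and the use of plain unweighted fits are simplifications of the paper's weighted scheme.

```python
# Minimal sketch of "estimate the modal values and the spreads separately" for
# symmetric triangular fuzzy outputs (center, spread): two ordinary
# least-squares fits on synthetic data. The data and the unweighted fits are
# simplifications of the weighted scheme described above.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=40)
centers = 1.5 + 2.0 * x + rng.normal(0, 0.3, size=x.size)    # modal values
spreads = 0.5 + 0.1 * x + rng.normal(0, 0.05, size=x.size)   # fuzziness grows with x

X = np.column_stack([np.ones_like(x), x])
beta_center, *_ = np.linalg.lstsq(X, centers, rcond=None)
beta_spread, *_ = np.linalg.lstsq(X, np.maximum(spreads, 0.0), rcond=None)

x0 = 5.0
c0 = beta_center @ [1.0, x0]
e0 = max(beta_spread @ [1.0, x0], 0.0)   # keep the predicted spread nonnegative
print(f"prediction at x={x0}: triangular fuzzy number ({c0 - e0:.2f}, {c0:.2f}, {c0 + e0:.2f})")
```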
Fuzzy least squares Several models for simple least-squares fitting of fuzzy-valued data are developed. Criteria are given for when fuzzy data sets can be fitted to the models, and analogues of the normal equations are derived.
The s-differentiability of a fuzzy-valued mapping Several approaches to define the differential of fuzzy-valued mappings can be found in the literature. In this paper we introduce a new concept of differential of a fuzzy-valued mapping, which is based on the notion of differential of the support function associated with such a mapping. Properties of the new concept, as well as the relation between it and previous approaches are also analyzed in detail.
The Vienna Definition Language
Outline of a New Approach to the Analysis of Complex Systems and Decision Processes The approach described in this paper represents a substantive departure from the conventional quantitative techniques of system analysis. It has three main distinguishing features: 1) use of so-called ``linguistic'' variables in place of or in addition to numerical variables; 2) characterization of simple relations between variables by fuzzy conditional statements; and 3) characterization of complex relations by fuzzy algorithms. A linguistic variable is defined as a variable whose values are sentences in a natural or artificial language. Thus, if tall, not tall, very tall, very very tall, etc. are values of height, then height is a linguistic variable. Fuzzy conditional statements are expressions of the form IF A THEN B, where A and B have fuzzy meaning, e.g., IF x is small THEN y is large, where small and large are viewed as labels of fuzzy sets. A fuzzy algorithm is an ordered sequence of instructions which may contain fuzzy assignment and conditional statements, e.g., x = very small, IF x is small THEN Y is large. The execution of such instructions is governed by the compositional rule of inference and the rule of the preponderant alternative. By relying on the use of linguistic variables and fuzzy algorithms, the approach provides an approximate and yet effective means of describing the behavior of systems which are too complex or too ill-defined to admit of precise mathematical analysis.
Linguistic Decision-Making Models Using linguistic values to assess results and information about external factors is quite usual in real decision situations. In this article we present a general model for such problems. Utilities are evaluated in a term set of labels and the information is supposed to be a linguistic evidence, that is, is to be represented by a basic assignment of probability (in the sense of Dempster-Shafer) but taking its values on a term set of linguistic likelihoods. Basic decision rules, based on fuzzy risk intervals, are developed and illustrated by several examples. The last section is devoted to analyzing the suitability of considering a hierarchical structure (represented by a tree) for the set of utility labels.
Using Linguistic Incomplete Preference Relations To Cold Start Recommendations Purpose - Analyzing current recommender systems, it is observed that the cold start problem is still far from being satisfactorily solved. This paper aims to present a hybrid recommender system which uses a knowledge-based recommendation model to provide good cold start recommendations. Design/methodology/approach - Hybridizing a collaborative system with a knowledge-based system that uses incomplete preference relations makes it possible to deal with the cold start problem. The management of customers' preferences, necessities and perceptions implies uncertainty. To manage such uncertainty, this information has been modeled by means of the fuzzy linguistic approach. Findings - The use of linguistic information provides flexibility and usability and facilitates the management of uncertainty in the computation of recommendations, and the use of incomplete preference relations in knowledge-based recommender systems improves the performance in those situations where collaborative models do not work properly. Research limitations/implications - Collaborative recommender systems have been successfully applied in many situations, but when the information is scarce such systems do not provide good recommendations. Practical implications - A linguistic hybrid recommendation model to solve the cold start problem and provide good recommendations in any situation is presented and then applied to a recommender system for restaurants. Originality/value - Current recommender systems have limitations in providing successful recommendations, mainly related to information scarcity, such as the cold start. The use of incomplete preference relations can alleviate these limitations, providing successful results in such situations.
Learning with dynamic group sparsity This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can stably recover sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms, with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
Uncertain probabilities I: the discrete case We consider discrete (finite) probability distributions where some of the probability values are uncertain. We model these uncertainties using fuzzy numbers. Then, employing restricted fuzzy arithmetic, we derive the basic laws of fuzzy (uncertain) probability theory. Applications are to the binomial probability distribution and queuing theory.
A Quadratic Modeling-Based Framework for Accurate Statistical Timing Analysis Considering Correlations The impact of parameter variations on timing due to process variations has become significant in recent years. In this paper, we present a statistical timing analysis (STA) framework with quadratic gate delay models that also captures spatial correlations. Our technique does not make any assumption about the distribution of the parameter variations, gate delays, and arrival times. We propose a Taylor-series expansion-based quadratic representation of gate delays and arrival times which are able to effectively capture the nonlinear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced due to quadratic modeling during STA, we also propose an efficient linear modeling driven quadratic STA scheme. We ran two sets of experiments assuming the global parameters to have uniform and Gaussian distributions, respectively. On average, the quadratic STA scheme had a 20.5× speedup in runtime as compared to Monte Carlo simulations, with an rms error of 0.00135 units between the two timing cumulative density functions (CDFs). The linear modeling driven quadratic STA scheme had a 51.5× speedup in runtime as compared to Monte Carlo simulations, with an rms error of 0.0015 units between the two CDFs. Our proposed technique is generic and can be applied to arbitrary variations in the underlying parameters under any spatial correlation model.
Compressed sensing of astronomical images: orthogonal wavelets domains A simple approach for orthogonal wavelets in compressed sensing (CS) applications is presented. We compare efficient algorithms for different orthogonal wavelet measurement matrices in CS for image processing from scanned photographic plates (SPP). Some important characteristics were obtained for astronomical image processing of SPP. The best orthogonal wavelet choice for measurement matrix construction in CS for image compression of images of SPP is given. The image quality measure for linear and nonlinear image compression methods is defined.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real-life industrial problem of mix-product selection. This problem occurs in production planning management, where a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real-life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product-mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix-product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
1.060617
0.066667
0.054458
0.016746
0.000275
0.000013
0.000002
0
0
0
0
0
0
0
Parallel-machine scheduling to minimize makespan with fuzzy processing times and learning effects This paper addresses parallel machine scheduling with learning effects. The objective is to minimize the makespan. To reflect reality, we consider the processing times as fuzzy numbers. To the best of our knowledge, scheduling with learning effects and fuzzy processing times on parallel machines has never been studied. The possibility measure is used to rank the fuzzy numbers. Two heuristic algorithms, the simulated annealing algorithm and the genetic algorithm, are proposed. Computational experiments have been conducted to evaluate their performance.
A new approach to similarity and inclusion measures between general type-2 fuzzy sets Interval type-2 fuzzy similarity and inclusion measures have been widely studied. In this paper, the axiomatic definitions of general type-2 fuzzy similarity and inclusion measures are given on the basis of interval type-2 fuzzy similarity and inclusion measures. To improve the shortcomings of the existing general type-2 fuzzy similarity and inclusion measures, we define two new general type-2 fuzzy similarity measures and two new general type-2 fuzzy inclusion measures based on α-plane representation theory, respectively, and discuss their related properties. Unlike some existing measures, one of the proposed similarity and inclusion measures is expressed as a type-1 fuzzy set, and therefore these definitions are consistent with the highly uncertain nature of general type-2 fuzzy sets. The theoretical proof is also given to illustrate that the proposed measures are natural extensions of the most popular type-1 fuzzy measures. In the end, the performances of the proposed similarity and inclusion measures are examined.
An interval type-2 fuzzy technique for order preference by similarity to ideal solutions using a likelihood-based comparison approach for multiple criteria decision analysis An interval type-2 fuzzy TOPSIS method is developed to address decision problems. Use of the likelihood-based comparison approach with the approximate ideals. Establishment of likelihood-based closeness coefficients using comparison indices. Multicriteria decision analysis based on interval type-2 trapezoidal fuzzy numbers. Comparative analysis shows the effectiveness and advantages of the proposed method. The technique for order preference by similarity to ideal solutions (TOPSIS) is a well-known compromising method for addressing decision-making problems. In general, incomplete preference information and vague subjective judgments are realistic in practice. Accordingly, the theory of interval type-2 fuzzy sets has received increasing attention in the decision-making field because of its great ability to handle imprecise and ambiguous information in a convenient manner. The purpose of this paper is to develop a novel interval type-2 fuzzy TOPSIS method for multiple criteria decision analysis that is based on interval type-2 trapezoidal fuzzy numbers. This paper introduces the concept of approximate positive-ideal and negative-ideal solutions and presents a simple way to approach the evaluative ratings of ideal solutions using interval type-2 trapezoidal fuzzy numbers. Based on the likelihoods of interval type-2 trapezoidal fuzzy binary relations, this paper proposes certain likelihood-based comparison indices to establish a likelihood-based closeness coefficient of each alternative relative to the approximate ideals. Applying a likelihood-based comparison approach with the approximate ideals, this paper develops the interval type-2 fuzzy TOPSIS procedure to determine the priority ranking orders of the alternatives under consideration of the multiple criteria evaluation/selection. Three practical applications involving landfill site selection, supplier selection, and car evaluation are examined to show the effectiveness and practicability of the proposed method. Furthermore, this paper makes a comparison of the solution results yielded by other interval type-2 fuzzy decision-making methods. The comparative analyses demonstrate that the proposed interval type-2 fuzzy TOPSIS method is easy to implement and produces effective and valid results for solving multiple criteria decision-making problems.
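For readers unfamiliar with the underlying TOPSIS machinery, the sketch below runs a plain crisp TOPSIS, only to illustrate the closeness-coefficient idea that the interval type-2 fuzzy method above generalizes; the decision matrix and weights are invented and all criteria are treated as benefit criteria.

```python
# Crisp TOPSIS baseline, included only to illustrate the closeness-coefficient
# idea that the interval type-2 fuzzy method generalizes. The decision matrix
# and weights are invented; all criteria are treated as benefit criteria.
import numpy as np

D = np.array([[7.0, 9.0, 9.0],      # alternative A1 scored on three criteria
              [8.0, 7.0, 8.0],      # A2
              [9.0, 6.0, 8.0],      # A3
              [6.0, 7.0, 9.0]])     # A4
w = np.array([0.4, 0.35, 0.25])

R = D / np.linalg.norm(D, axis=0)   # vector-normalize each criterion column
V = R * w                           # weighted normalized ratings
v_pos, v_neg = V.max(axis=0), V.min(axis=0)        # ideal and anti-ideal points
d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)
cc = d_neg / (d_pos + d_neg)        # closeness coefficient, larger is better
print(np.round(cc, 3), "ranking (best first):", np.argsort(-cc) + 1)
```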
Interval-valued fuzzy permutation method and experimental analysis on cardinal and ordinal evaluations This paper presents interval-valued fuzzy permutation (IVFP) methods for solving multiattribute decision making problems based on interval-valued fuzzy sets. First, we evaluate alternatives according to the achievement levels of attributes, which admits cardinal or ordinal representation. The relative importance of each attribute can also be measured by interval or scalar data. Next, we identify the concordance, midrange concordance, weak concordance, discordance, midrange discordance and weak discordance sets for each ordering. The proposed method consists of testing each possible ranking of the alternatives against all others. The evaluation value of each permutation can be computed either by cardinal weights or by solving programming problems. Then, we choose the permutation with the maximum evaluation value, and the optimal ranking order of alternatives can be obtained. An experimental analysis of IVFP rankings given cardinal and ordinal evaluations is conducted with discussions on consistency rates, contradiction rates, inversion rates, and average Spearman correlation coefficients.
Inclusion and subsethood measure for interval-valued fuzzy sets and for continuous type-2 fuzzy sets The main aim of this paper is to propose new subsethood measures for continuous, general type-2 fuzzy sets. For this purpose, we introduce inclusions and subsethood measures for interval-valued fuzzy sets first. Then, using an α-plane representation for type-2 fuzzy sets, we extend these inclusions and subsethood measures to general type-2 fuzzy sets. Subsethood measures for interval-valued fuzzy sets (hence, also for type-2 fuzzy sets) rely on already known subsethood measures for ordinary fuzzy sets. We focus on a special subsethood measure for ordinary fuzzy sets, based on α-cut representation, and show how to compute subsethood measures for continuous type-2 fuzzy sets with no need for discretizing the universe. This is a very interesting and useful property of the proposed subsethood measures, which is one of the reasons why our approach has less computational demand than the others.
An interactive method for multiple criteria group decision analysis based on interval type-2 fuzzy sets and its application to medical decision making The theory of interval type-2 fuzzy sets provides an intuitive and computationally feasible way of addressing uncertain and ambiguous information in decision-making fields. The aim of this paper is to develop an interactive method for handling multiple criteria group decision-making problems, in which information about criterion weights is incompletely (imprecisely or partially) known and the criterion values are expressed as interval type-2 trapezoidal fuzzy numbers. With respect to the relative importance of multiple decision-makers and group consensus of fuzzy opinions, a hybrid averaging approach combining weighted averages and ordered weighted averages was employed to construct the collective decision matrix. An integrated programming model was then established based on the concept of signed distance-based closeness coefficients to determine the importance weights of criteria and the priority ranking of alternatives. Subsequently, an interactive procedure was proposed to modify the model according to the decision-makers' feedback on the degree of satisfaction toward undesirable solution results for the sake of gradually improving the integrated model. The feasibility and applicability of the proposed methods are illustrated with a medical decision-making problem of patient-centered medicine concerning basilar artery occlusion. A comparative analysis with other approaches was performed to validate the effectiveness of the proposed methodology.
First-order incremental block-based statistical timing analysis Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
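The canonical first-order form referred to above can be sketched directly: a mean, a vector of sensitivities to shared unit-normal sources, and an independent unit-normal term. The snippet adds two such forms (the "sum" operation, where means and sensitivities add and the independent parts combine in quadrature) and checks the result against Monte Carlo; all the numbers are invented, and the statistical "max" operation of the paper is not reproduced here.

```python
# Sketch of the canonical first-order form a0 + sum_i a_i*dX_i + a_r*dR, with
# dX_i shared unit-normal sources and dR an independent unit-normal term. The
# "sum" operation adds means and sensitivities and combines the independent
# parts in quadrature; it is checked here against Monte Carlo. All numbers are
# invented, and the statistical "max" operation of the paper is not reproduced.
import numpy as np

def add(c1, c2):
    return {"mean": c1["mean"] + c2["mean"],
            "a": c1["a"] + c2["a"],
            "r": np.hypot(c1["r"], c2["r"])}

arrival = {"mean": 10.0, "a": np.array([0.8, 0.3]), "r": 0.5}
gate    = {"mean": 4.0,  "a": np.array([0.4, 0.2]), "r": 0.3}
out = add(arrival, gate)
print("analytic  :", out["mean"], np.sqrt(np.sum(out["a"]**2) + out["r"]**2))

rng = np.random.default_rng(3)
N = 200_000
dX = rng.standard_normal((N, 2))                       # shared (correlated) sources
samples = (arrival["mean"] + dX @ arrival["a"] + arrival["r"] * rng.standard_normal(N)
           + gate["mean"] + dX @ gate["a"] + gate["r"] * rng.standard_normal(N))
print("monte carlo:", samples.mean(), samples.std())
```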
A compressed sensing approach for biological microscopic image processing In fluorescence microscopy the noise level and the photobleaching are cross-dependent problems, since reducing exposure time to reduce photobleaching degrades image quality while increasing the noise level. These two problems cannot be solved independently as a post-processing task; hence the most important contribution of this work is to a-priori denoise and reduce photobleaching simultaneously by using the Compressed Sensing (CS) framework. In this paper, we propose a CS-based denoising framework, based on statistical properties of the CS optimality, noise reconstruction characteristics and signal modeling, applied to microscopy images with low signal-to-noise ratio (SNR). Our approach has several advantages over traditional denoising methods, since it can under-sample, recover and denoise images simultaneously. We demonstrate with simulated and practical experiments on fluorescence image data that, thanks to CS denoising, we can obtain images with similar or increased SNR while still being able to reduce exposure times.
Type-2 fuzzy ontology-based semantic knowledge for collision avoidance of autonomous underwater vehicles. The volume of obstacles encountered in the marine environment is rapidly increasing, which makes the development of collision avoidance systems more challenging. Several fuzzy ontology-based simulators have been proposed to provide a virtual platform for the analysis of maritime missions. However, due to the simulators’ limitations, ontology-based knowledge cannot be utilized to evaluate maritime robot algorithms and to avoid collisions. The existing simulators must be equipped with smart semantic domain knowledge to provide an efficient framework for the decision-making system of AUVs. This article presents type-2 fuzzy ontology-based semantic knowledge (T2FOBSK) and a simulator for marine users that will reduce experimental time and the cost of marine robots and will evaluate algorithms intelligently. The system reformulates the user’s query to extract the positions of AUVs and obstacles and convert them to a proper format for the simulator. The simulator uses semantic knowledge to calculate the degree of collision risk and to avoid obstacles. The available type-1 fuzzy ontology-based approach cannot extract intensively blurred data from the hazy marine environment to offer actual solutions. Therefore, we propose a type-2 fuzzy ontology to provide accurate information about collision risk and the marine environment during real-time marine operations. Moreover, the type-2 fuzzy ontology is designed using Protégé OWL-2 tools. The DL query and SPARQL query are used to evaluate the ontology. The distance to closest point of approach (DCPA), time to closest point of approach (TCPA) and variation of compass degree (VCD) are used to calculate the degree of collision risk between AUVs and obstacles. The experimental and simulation results show that the proposed architecture is highly efficient and highly productive for marine missions and the real-time decision-making system of AUVs.
A framework for understanding human factors in web-based electronic commerce The World Wide Web and email are used increasingly for purchasing and selling products. The use of the internet for these functions represents a significant departure from the standard range of information retrieval and communication tasks for which it has most often been used. Electronic commerce should not be assumed to be information retrieval, it is a separate task-domain, and the software systems that support it should be designed from the perspective of its goals and constraints. At present there are many different approaches to the problem of how to support seller and buyer goals using the internet. They range from standard, hierarchically arranged, hyperlink pages to “electronic sales assistants”, and from text-based pages to 3D virtual environments. In this paper, we briefly introduce the electronic commerce task from the perspective of the buyer, and then review and analyse the technologies. A framework is then proposed to describe the design dimensions of electronic commerce. We illustrate how this framework may be used to generate additional, hypothetical technologies that may be worth further exploration.
Computing With Words for Hierarchical Decision Making Applied to Evaluating a Weapon System The perceptual computer (Per-C) is an architecture that makes subjective judgments by computing with words (CWWs). This paper applies the Per-C to hierarchical decision making, which means decision making based on comparing the performance of competing alternatives, where each alternative is first evaluated based on hierarchical criteria and subcriteria, and then, these alternatives are compared to arrive at either a single winner or a subset of winners. What can make this challenging is that the inputs to the subcriteria and criteria can be numbers, intervals, type-1 fuzzy sets, or even words modeled by interval type-2 fuzzy sets. Novel weighted averages are proposed in this paper as a CWW engine in the Per-C to aggregate these diverse inputs. A missile-evaluation problem is used to illustrate it. The main advantages of our approaches are that diverse inputs can be aggregated, and uncertainties associated with these inputs can be preserved and are propagated into the final evaluation.
Objective Video Quality Assessment Methods: A Classification, Review, and Performance Comparison With the increasing demand for video-based applications, the reliable prediction of video quality has increased in importance. Numerous video quality assessment methods and metrics have been proposed over the past years with varying computational complexity and accuracy. In this paper, we introduce a classification scheme for full-reference and reduced-reference media-layer objective video quality assessment methods. Our classification scheme first classifies a method according to whether natural visual characteristics or perceptual (human visual system) characteristics are considered. We further subclassify natural visual characteristics methods into methods based on natural visual statistics or natural visual features. We subclassify perceptual characteristics methods into frequency or pixel-domain methods. According to our classification scheme, we comprehensively review and compare the media-layer objective video quality models for both standard resolution and high definition video. We find that the natural visual statistics based MultiScale-Structural SIMilarity index (MS-SSIM), the natural visual feature based Video Quality Metric (VQM), and the perceptual spatio-temporal frequency-domain based MOtion-based Video Integrity Evaluation (MOVIE) index give the best performance for the LIVE Video Quality Database.
Dominance-based fuzzy rough set analysis of uncertain and possibilistic data tables In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which is comprised of a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
1.2
0.066667
0.066667
0.028571
0.018182
0.013333
0
0
0
0
0
0
0
0
Uniform approximation to Cauchy principal value integrals with logarithmic singularity. An approximation of Clenshaw–Curtis type is given for Cauchy principal value integrals of logarithmically singular functions I(f; c) = p.v.∫_{-1}^{1} f(x) (log|x−c|)/(x−c) dx (c ∈ (−1,1)) with a given function f. Using a polynomial p_N of degree N interpolating f at the Chebyshev nodes we obtain an approximation I(p_N; c) ≅ I(f; c). We expand p_N in terms of Chebyshev polynomials with O(N log N) computations by using the fast Fourier transform. Our method is efficient for smooth functions f, for which p_N converges to f fast as N grows, and so simple to implement. This is achieved by exploiting three-term inhomogeneous recurrence relations in three stages to evaluate I(p_N; c). For f(z) analytic on the interval [−1,1] in the complex plane z, the error of the approximation I(p_N; c) is shown to be bounded uniformly. Using numerical examples we demonstrate the performance of the present method.
On quadrature of highly oscillatory integrals with logarithmic singularities In this paper a quadrature rule is discussed for highly oscillatory integrals with logarithmic singularities. Its error is analyzed in terms of the frequency ω, and the computation of its moments is given. The new rule is implemented by interpolating f at the Chebyshev nodes and at the singular point, where the interpolation polynomial satisfies some conditions. Numerical experiments confirm the efficiency of the rule for obtaining the approximations.
On the convergence rate of Clenshaw-Curtis quadrature for integrals with algebraic endpoint singularities. In this paper, we are concerned with Clenshaw–Curtis quadrature for integrals with algebraic endpoint singularities. An asymptotic error expansion and convergence rate are derived by combining a delicate analysis of the Chebyshev coefficients of functions with algebraic endpoint singularities and the aliasing formula of Chebyshev polynomials. Numerical examples are provided to confirm our analysis.
Computation of integrals with oscillatory singular factors of algebraic and logarithmic type In this paper, we present the Clenshaw-Curtis-Filon methods and the higher order methods for computing many classes of oscillatory integrals with algebraic or logarithmic singularities at the two endpoints of the interval of integration. The methods first require an interpolant of the nonoscillatory and nonsingular parts of the integrands at N + 1 Clenshaw-Curtis points. Then the required modified moments can be accurately and efficiently computed by constructing some recurrence relations. Moreover, for these quadrature rules, their absolute errors, in inverse powers of the frequency ω, are given. The presented methods share the advantageous property that the accuracy improves greatly, for fixed N, as ω increases. Numerical examples show the accuracy and efficiency of the proposed methods.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Fuzzy set methods for qualitative and natural language oriented simulation The author discusses the approach of using fuzzy set theory to create a formal way of viewing the qualitative simulation of models whose states, inputs, outputs, and parameters are uncertain. Simulation was performed using detailed and accurate models, and it was shown how input and output trajectories could reflect linguistic (or qualitative) changes in a system. Uncertain variables are encoded using triangular fuzzy numbers, and three distinct fuzzy simulation approaches (Monte Carlo, correlated and uncorrelated) are defined. The methods discussed are also valid for discrete event simulation; experiments have been performed on the fuzzy simulation of a single server queuing model. In addition, an existing C-based simulation toolkit, SimPack, was augmented to include the capabilities for modeling using fuzzy arithmetic and linguistic association, and a C++ class definition was coded for fuzzy number types
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
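The maximum-margin training described above is what modern support vector machine libraries implement. As a hedged illustration (using scikit-learn rather than anything from the paper itself), the sketch below trains a polynomial-kernel maximum-margin classifier on a small optical-character dataset and reports how many training patterns end up as supporting patterns.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# maximum-margin classifier with a degree-3 polynomial decision function;
# the support vectors are the training patterns closest to the decision boundary
clf = SVC(kernel="poly", degree=3, C=10.0).fit(X_tr, y_tr)
print("supporting patterns:", clf.n_support_.sum(),
      "test accuracy:", round(clf.score(X_te, y_te), 3))
```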
A review on spectrum sensing for cognitive radio: challenges and solutions Cognitive radio is widely expected to be the next Big Bang in wireless communications. Spectrum sensing, that is, detecting the presence of the primary users in a licensed spectrum, is a fundamental problem for cognitive radio. As a result, spectrum sensing has been reborn as a very active research area in recent years despite its long history. In this paper, spectrum sensing techniques from the optimal likelihood ratio test to energy detection, matched filtering detection, cyclostationary detection, eigenvalue-based sensing, joint space-time sensing, and robust sensing methods are reviewed. Cooperative spectrum sensing with multiple receivers is also discussed. Special attention is paid to sensing methods that need little prior information on the source signal and the propagation channel. Practical challenges such as noise power uncertainty are discussed and possible solutions are provided. Theoretical analysis on the test statistic distribution and threshold setting is also investigated.
Sensor Selection via Convex Optimization We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
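The heuristic above relaxes the 0/1 selection variables and solves a convex (log-determinant) problem with a rounding step. As a lighter-weight stand-in that avoids a convex-optimization dependency, the sketch below greedily maximizes the same log-det information objective; it illustrates the selection problem, not the paper's actual relaxation-and-bound method.

```python
import numpy as np

def greedy_sensor_selection(A, k, ridge=1e-6):
    """Greedily pick k of the m measurement rows a_i to maximize
    log det(sum_i a_i a_i^T); a small ridge keeps the determinant defined
    before n linearly independent rows have been chosen."""
    m, n = A.shape
    chosen, info = [], ridge * np.eye(n)
    for _ in range(k):
        gains = [np.linalg.slogdet(info + np.outer(A[i], A[i]))[1]
                 if i not in chosen else -np.inf for i in range(m)]
        best = int(np.argmax(gains))
        chosen.append(best)
        info += np.outer(A[best], A[best])
    return sorted(chosen)

A = np.random.default_rng(0).standard_normal((60, 5))   # 60 candidate sensors, 5 parameters
print(greedy_sensor_selection(A, k=10))
```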
Random Alpha Pagerank We suggest a revision to the PageRank random surfer model that considers the influence of a population of random surfers on the PageRank vector. In the revised model, each member of the population has its own teleportation parameter chosen from a probability distribution, and consequently, the ranking vector is random. We propose three algorithms for computing the statistics of the random ranking vector based respectively on (i) random sampling, (ii) paths along the links of the underlying graph, and (iii) quadrature formulas. We find that the expectation of the random ranking vector produces similar rankings to its deterministic analogue, but the standard deviation gives uncorrelated information (under a Kendall-tau metric) with myriad potential uses. We examine applications of this model to web spam.
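A minimal sketch of the sampling algorithm (approach (i) above): draw a teleportation parameter per surfer from some distribution, compute PageRank for each draw, and report the mean and standard deviation of the resulting ranking vector. The Beta(2, 3) prior, the sample size, and the use of networkx are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
import networkx as nx

def random_alpha_pagerank(G, n_samples=200, a=2.0, b=3.0, seed=0):
    """Monte Carlo estimate of the expectation and standard deviation of the
    PageRank vector when each surfer draws its own teleportation parameter
    alpha ~ Beta(a, b) (the prior here is an illustrative assumption)."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    samples = np.empty((n_samples, len(nodes)))
    for i in range(n_samples):
        pr = nx.pagerank(G, alpha=rng.beta(a, b))
        samples[i] = [pr[v] for v in nodes]
    return dict(zip(nodes, samples.mean(axis=0))), dict(zip(nodes, samples.std(axis=0)))

mean_pr, std_pr = random_alpha_pagerank(nx.karate_club_graph())
print("highest expected rank:", max(mean_pr, key=mean_pr.get),
      "highest rank uncertainty:", max(std_pr, key=std_pr.get))
```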
An overview of quality of experience measurement challenges for video applications in IP networks The increase in multimedia content on the Internet has created a renewed interest in quality assessment. There is however a main difference from the traditional quality assessment approaches, as now, the focus relies on the user perceived quality, opposed to the network centered approach classically proposed. In this paper we overview the most relevant challenges to perform Quality of Experience (QoE) assessment in IP networks and highlight the particular considerations necessary when compared to alternative mechanisms, already deployed, such as Quality of Service (QoS). To assist on the handling of such challenges we first discuss the different approaches to Quality of Experience assessment along with the most relevant QoE metrics, and then we discuss how they are used to provide objective results about user satisfaction.
Process variability-aware transient fault modeling and analysis Due to reduction in device feature size and supply voltage, the sensitivity of digital systems to transient faults is increasing dramatically. As technology scales further, the increase in transistor integration capacity also leads to the increase in process and environmental variations. Despite these difficulties, it is expected that systems remain reliable while delivering the required performance. Reliability and variability are emerging as new design challenges, thus pointing to the importance of modeling and analysis of transient faults and variation sources for the purpose of guiding the design process. This work presents a symbolic approach to modeling the effect of transient faults in digital circuits in the presence of variability due to process manufacturing. The results show that using a nominal case and not including variability effects, can underestimate the SER by 5% for the 50% yield point and by 10% for the 90% yield point.
Implementing Competitive Learning in a Quantum System Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system's potential for excellent performance.
1.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
0
0
Ant colony optimization for QoE-centric flow routing in software-defined networks We present the design, implementation, and evaluation of an ant colony optimization (ACO) approach to flow routing in software-defined networking (SDN) environments. While exploiting a global network view and configuration flexibility provided by SDN, the approach also utilizes quality of experience (QoE) estimation models and seeks to maximize the user QoE for multimedia services. As network metrics (e.g., packet loss) influence QoE for such services differently, based on the service type and its integral media flows, the goal of our ACO-based heuristic algorithm is to calculate QoE-aware paths that conform to traffic demands and network limitations. A Java implementation of the algorithm is integrated into the SDN controller OpenDaylight so as to program the path selections. The evaluation results indicate promising QoE improvements of our approach over shortest path routing, as well as low running time.
A survey on QoE-oriented wireless resources scheduling Future wireless systems are expected to provide a wide range of services to more and more users. Advanced scheduling strategies thus arise not only to perform efficient radio resource management, but also to provide fairness among the users. On the other hand, the users’ perceived quality, i.e., Quality of Experience (QoE), is becoming one of the main drivers within the schedulers design. In this context, this paper starts by providing a comprehension of what is QoE and an overview of the evolution of wireless scheduling techniques. Afterwards, a survey on the most recent QoE-based scheduling strategies for wireless systems is presented, highlighting the application/service of the different approaches reported in the literature, as well as the parameters that were taken into account for QoE optimization. Therefore, this paper aims at helping readers interested in learning the basic concepts of QoE-oriented wireless resources scheduling, as well as getting in touch with its current research frontier.
Cross-layer QoE-driven admission control and resource allocation for adaptive multimedia services in LTE. This paper proposes novel resource management mechanisms for multimedia services in 3GPP Long Term Evolution (LTE) networks aimed at enhancing session establishment success and network resources management, while maintaining acceptable end-user quality of experience (QoE) levels. We focus on two aspects, namely admission control mechanisms and resource allocation. Our cross-layer approach relies on application-level user- and service-related knowledge exchanged at session initiation time, whereby different feasible service configurations corresponding to different quality levels and resource requirements can be negotiated and passed on to network-level resource management mechanisms. We propose an admission control algorithm which admits sessions by considering multiple feasible configurations of a given service, and compare it with a baseline algorithm that considers only single service configurations, which is further related to other state-of-the-art algorithms. Our results show that admission probability can be increased in light of admitting less resource-demanding configurations in cases where resource restrictions prevent admission of a session at the highest quality level. Additionally, in case of reduced resource availability, we consider resource reallocation mechanisms based on controlled session quality degradation while maintaining user QoE above the acceptable threshold. Simulation results have shown that given a wireless access network with limited resources, our approach leads to increased session establishment success (i.e., fewer sessions are blocked) while maintaining acceptable user-perceived quality levels.
OTT-ISP joint service management: A Customer Lifetime Value based approach. In this work, we propose a QoE-aware collaboration approach between Over-The-Top providers (OTT) and Internet Service Providers (ISP) based on the maximization of the profit by considering the user churn of Most Profitable Customers (MPCs), which are classified in terms of the Customer Lifetime Value (CLV). The contribution of this work is multifold. Firstly, we investigate the different perspectives of ISPs and OTTs regarding QoE management and why they should collaborate. Secondly, we investigate the current ongoing collaboration scenarios in the multimedia industry. Thirdly, we propose the QoE-aware collaboration framework based on the CLV, which includes the interfaces for information sharing between OTTs and ISPs and the use of Content Delivery Networks (CDN) and surrogate servers. Finally, we provide simulation results aiming at demonstrating that a higher profit is achieved when collaboration is introduced, by engaging more MPCs with respect to current solutions.
To Each According To His Needs: Dimensioning Video Buffer For Specific User Profiles And Behavior Today's video streaming platforms offer videos in a variety of quality settings in order to attract as many users as possible. But even though a sufficiently dimensioned network can not always be provided for the best experience, users are asking for high QoE. Users consume the content of a video streaming platform in different ways, while video delivery platforms currently do not account for these scenarios and thus ensure at best mediocre QoE. In this paper, we develop a queuing model and provide a mean-value analysis to investigate the impact of user profiles on the QoE of HTTP Video Streaming for typical user scenarios. Our results show that the user profile and particularly the scenario have to be respected when dimensioning the buffer. Further, we present recommendations on how to adapt player parameters in order to optimize the QoE for individual users profiles and viewing habits. The provided model leads to relevant insights that are required to build a system that guarantees each user the best attainable QoE.
The memory effect and its implications on Web QoE modeling Quality of Experience (QoE) has gained enormous attention during the recent years. So far, most of the existing QoE research has focused on audio and video streaming applications, although HTTP traffic carries the majority of traffic in the residential broadband Internet. However, existing QoE models for this domain do not consider temporal dynamics or historical experiences of the user's satisfaction while consuming a certain service. This psychological influence factor of past experience is referred to as the memory effect. The first contribution of this paper is the identification of the memory effect as a key influence factor for Web QoE modeling based on subjective user studies. As second contribution, three different QoE models are proposed which consider the implications of the memory effect and imply the required extensions of the basic models. The proposed Web QoE models are described with a) support vector machines, b) iterative exponential regressions, and c) two-dimensional hidden Markov models.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ) δ(t−τ) obeying |T| ≤ C_M · (log N)^{-1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{-M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
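A small numerical illustration of the recovery principle above: reconstruct a spike train from a few linear measurements by ℓ1 minimization (basis pursuit), written as a linear program. For simplicity the sketch uses rows of a real orthonormal DCT matrix instead of the paper's complex Fourier samples, and the problem sizes are arbitrary.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, m, k = 128, 40, 5                               # signal length, measurements, spikes

x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)    # k-sparse spike train

rows = rng.choice(N, m, replace=False)             # random subset of "frequencies"
A = dct(np.eye(N), norm="ortho", axis=0)[rows]     # partial real DCT as measurement matrix
b = A @ x

# basis pursuit  min ||x||_1  s.t.  A x = b,  as an LP in x = u - v with u, v >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("max reconstruction error:", float(np.abs(x_hat - x).max()))
```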
A method based on PSO and granular computing of linguistic information to solve group decision making problems defined in heterogeneous contexts. • Information granulation of linguistic information used in group decision making. • Granular Computing is used to make the linguistic information operational. • Linguistic information is expressed in terms of information granules defined as sets. • The granulation of the linguistic terms is formulated as an optimization problem. • The distribution and semantics of the linguistic terms are not assumed a priori.
Incremental criticality and yield gradients Criticality and yield gradients are two crucial diagnostic metrics obtained from Statistical Static Timing Analysis (SSTA). They provide valuable information to guide timing optimization and timing-driven physical synthesis. Existing work in the literature, however, computes both metrics in a non-incremental manner, i.e., after one or more changes are made in a previously-timed circuit, both metrics need to be recomputed from scratch, which is obviously undesirable for optimizing large circuits. The major contribution of this paper is to propose two novel techniques to compute both criticality and yield gradients efficiently and incrementally. In addition, while node and edge criticalities are addressed in the literature, this paper for the first time describes a technique to compute path criticalities. To further improve algorithmic efficiency, this paper also proposes a novel technique to update "chip slack" incrementally. Numerical results show our methods to be over two orders of magnitude faster than previous work.
Mono-multi bipartite Ramsey numbers, designs, and matrices Eroh and Oellermann defined BRR(G_1, G_2) as the smallest N such that any edge coloring of the complete bipartite graph K_{N,N} contains either a monochromatic G_1 or a multicolored G_2. We restate the problem of determining BRR(K_{1,λ}, K_{r,s}) in matrix form and prove estimates and exact values for several choices of the parameters. Our general bound uses Füredi's result on fractional matchings of uniform hypergraphs and we show that it is sharp if certain block designs exist. We obtain two sharp results for the case r = s = 2: we prove BRR(K_{1,λ}, K_{2,2}) = 3λ - 2 and that the smallest n for which any edge coloring of K_{λ,n} contains either a monochromatic K_{1,λ} or a multicolored K_{2,2} is λ^2.
Near-Optimal Sparse Recovery in the L1 Norm We consider the *approximate sparse recovery problem*, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional *sketch* Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x' of x such that the L1 approximation error ||x - x'||_1 is close to the minimum of ||x - x*||_1 over all vectors x* with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes. In particular, this is the first recovery scheme that guarantees O(k log(n/k)) sketch length and near-linear O(n log(n/k)) recovery time *simultaneously*. It also features low encoding and update times, and is noise-resilient.
Image smoothing via L0 gradient minimization We present a new image editing method, particularly effective for sharpening major edges by increasing the steepness of transition while eliminating a manageable degree of low-amplitude structures. The seemingly contradictive effect is achieved in an optimization framework making use of L0 gradient minimization, which can globally control how many non-zero gradients are resulted in to approximate prominent structure in a sparsity-control manner. Unlike other edge-preserving smoothing approaches, our method does not depend on local features, but instead globally locates important edges. It, as a fundamental tool, finds many applications and is particularly beneficial to edge extraction, clip-art JPEG artifact removal, and non-photorealistic effect generation.
Stochastic Behavioral Modeling and Analysis for Analog/Mixed-Signal Circuits It has become increasingly challenging to model the stochastic behavior of analog/mixed-signal (AMS) circuits under large-scale process variations. In this paper, a novel moment-matching-based method has been proposed to accurately extract the probabilistic behavioral distributions of AMS circuits. This method first utilizes Latin hypercube sampling coupling with a correlation control technique to generate a few samples (e.g., sample size is linear with number of variable parameters) and further analytically evaluate the high-order moments of the circuit behavior with high accuracy. In this way, the arbitrary probabilistic distributions of the circuit behavior can be extracted using moment-matching method. More importantly, the proposed method has been successfully applied to high-dimensional problems with linear complexity. The experiments demonstrate that the proposed method can provide up to 1666X speedup over crude Monte Carlo method for the same accuracy.
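To illustrate the sampling step described above, the sketch below draws Latin hypercube samples over a handful of process parameters, maps them to standard normal variations, and estimates low-order central moments of a circuit response. The response function is a made-up placeholder, and the correlation-control and moment-matching (distribution-fitting) stages of the paper are omitted.

```python
import numpy as np
from scipy.stats import norm, qmc

d, n = 6, 60                                     # number of process parameters, sample size
lhs = qmc.LatinHypercube(d=d, seed=0).random(n)  # stratified samples in [0, 1)^d
dX = norm.ppf(lhs)                               # map to ~N(0, 1) parameter variations

# hypothetical circuit behaviour (e.g. an offset voltage) as a function of the variations
y = 1.0 + 0.3 * dX[:, 0] - 0.2 * dX[:, 1] ** 2 + 0.05 * dX[:, 2] * dX[:, 3]

central_moments = [np.mean((y - y.mean()) ** p) for p in (2, 3, 4)]
print("mean:", round(y.mean(), 4), "central moments 2-4:", np.round(central_moments, 4))
```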
1.22
0.22
0.11
0.11
0.055
0.01375
0
0
0
0
0
0
0
0
View Synthesis for Advanced 3D Video Systems Interest in 3D video applications and systems is growing rapidly and technology is maturating. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
Shape-adaptive wavelet encoding of depth maps We present a novel depth-map codec aimed at free-viewpoint 3DTV. The proposed codec relies on a shape-adaptive wavelet transform and an explicit representation of the locations of major depth edges. Unlike classical wavelet transforms, the shape-adaptive transform generates small wavelet coefficients along depth edges, which greatly reduces the data entropy. The wavelet transform is implemented by shape-adaptive lifting, which enables fast computations and perfect reconstruction. We also develop a novel rate-constrained edge detection algorithm, which integrates the idea of significance bitplanes into the Canny edge detector. Along with a simple chain code, it provides an efficient way to extract and encode edges. Experimental results on synthetic and real data confirm the effectiveness of the proposed algorithm, with PSNR gains of 5 dB and more over the Middlebury dataset.
Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, w...
H.264-Based depth map sequence coding using motion information of corresponding texture video Three-dimensional television systems using depth-image-based rendering techniques are attractive in recent years. In those systems, a monoscopic two-dimensional texture video and its associated depth map sequence are transmitted. In order to utilize transmission bandwidth and storage space efficiently, the depth map sequence should be compressed as well as the texture video. Among previous works for depth map sequence coding, H.264 has shown the best performance; however, it has some disadvantages of requiring long encoding time and high encoder cost. In this paper, we propose a new coding structure for depth map coding with H.264 so as to reduce encoding time significantly while maintaining high compression efficiency. Instead of estimating motion vectors directly in the depth map, we generate candidate motion modes by exploiting motion information of the corresponding texture video. Experimental results show that the proposed algorithm reduces the complexity to 60% of the previous scheme that encodes two sequences separately and coding performance is also improved up to 1dB at low bit rates.
3d Video Coding Using The Synthesized View Distortion Change In 3D video, texture and supplementary depth data are coded to enable the interpolation of a required number of synthesized views for multi-view displays in the range of the original camera views. The coding of the depth data can be improved by analyzing the distortion of synthesized video views instead of the depth map distortion. Therefore, this paper introduces a new distortion metric for 3D video coding, which relates changes in the depth map directly to changes of the overall synthesized view distortion. It is shown how the new metric can be integrated into the rate-distortion optimization (RDO) process of an encoder, that is based on high-efficiency video coding technology. An evaluation of the modified encoder is conducted using different view synthesis algorithms and shows about 50% rate savings for the depth data or 0.6 dB PSNR gains for the synthesized view.
High-quality video view interpolation using a layered representation The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation.In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
Joint Texture And Depth Map Video Coding Based On The Scalable Extension Of H.264/Avc Depth-Image-Based Rendering (DIBR) is widely used for view synthesis in 3D video applications. Compared with traditional 2D video applications, both the texture video and its associated depth map are required for transmission in a communication system that supports DIBR. To efficiently utilize limited bandwidth, coding algorithms, e.g. the Advanced Video Coding (H.264/AVC) standard, can be adopted to compress the depth map using the 4:0:0 chroma sampling format. However, when the correlation between texture video and depth map is exploited, the compression efficiency may be improved compared with encoding them independently using H.264/AVC. A new encoder algorithm which employs Scalable Video Coding (SVC), the scalable extension of H.264/AVC, to compress the texture video and its associated depth map is proposed in this paper. Experimental results show that the proposed algorithm can provide up to 0.97 dB gain for the coded depth maps, compared with the simulcast scheme, wherein texture video and depth map are coded independently by H.264/AVC.
Real-time scalable hardware architecture for 3D-HEVC bipartition modes. This article presents a real-time scalable hardware architecture for the bipartition modes of 3D high-efficiency video coding (3D-HEVC) standard, which includes the depth modeling modes 1 (DMM-1) and 4 (DMM-4). A simplification of the DMM-1 algorithm was done, removing the refinement step. This simplification causes a small BD-rate increase (0.09 %) with the advantage of better using our hardware resources, reducing the necessary memory required for storing all DMM-1 wedgelet patterns by 30 %. The scalable architecture can be configured to support all the different block sizes supported by the 3D-HEVC and also to reach different throughputs, according to the application requirements. Then, the proposed solution can be efficiently used for several encoding scenarios and many different applications. Synthesis results considering a test case show that the designed architecture is capable of processing HD 1080p videos in real time, but with other configurations, higher resolutions are also possible to be processed.
Network emulation in the VINT/NS simulator Employing an emulation capability in network simulation provides the ability for real-world traffic to interact with a simulation. The benefits of emulation include the ability to expose experimental algorithms and protocols to live traffic loads, and to test real-world protocol implementations against repeatable interference generated in simulation. This paper describes the design and implementation of the emulation facility in the NS simulator, a commonly used, publicly available network research simulator.
Weighted Superimposed Codes and Constrained Integer Compressed Sensing We introduce a new family of codes, termed weighted superimposed codes (WSCs). This family generalizes the class of Euclidean superimposed codes (ESCs), used in multiuser identification systems. WSCs allow for discriminating all bounded, integer-valued linear combinations of real-valued codewords that satisfy prescribed norm and nonnegativity constraints. By design, WSCs are inherently noise tolerant. Therefore, these codes can be seen as special instances of robust compressed sensing schemes. The main results of the paper are lower and upper bounds on the largest achievable code rates of several classes of WSCs. These bounds suggest that, with the codeword and weighting vector constraints at hand, one can improve the code rates achievable by standard compressive sensing techniques.
Which logic is the real fuzzy logic? This paper is a contribution to the discussion of the problem, whether there is a fuzzy logic that can be considered as the real fuzzy logic. We give reasons for taking IMTL, BL, ŁΠ and EvŁ (fuzzy logic with evaluated syntax) as those fuzzy logics that should be indeed taken as the real fuzzy logics.
Statistical timing analysis under spatial correlations Process variations are of increasing concern in today's technologies, and they can significantly affect circuit performance. An efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay considering both inter-die and intra-die variations, while accounting for the effects of spatial correlations of intra-die parameter variations, is presented. The procedure uses a first-order Taylor series expansion to approximate the gate and interconnect delays. Next, principal component analysis (PCA) techniques are employed to transform the set of correlated parameters into an uncorrelated set. The statistical timing computation is then easily performed with a program evaluation and review technique (PERT)-like circuit graph traversal. The run time of this algorithm is linear in the number of gates and interconnects, as well as the number of varying parameters and grid partitions that are used to model spatial correlations. The accuracy of the method is verified with Monte Carlo (MC) simulation. On average, for the 100 nm technology, the errors of mean and standard deviation (SD) values computed by the proposed method are 1.06% and -4.34%, respectively, and the errors of predicting the 99% and 1% confidence point are -2.46% and -0.99%, respectively. A testcase with about 17 800 gates was solved in about 500 s, with high accuracy as compared to an MC simulation that required more than 15 h.
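The core of the approach above is a first-order delay model plus a PCA rotation that turns correlated parameter variations into independent components, after which means and variances propagate analytically. The sketch below shows that propagation for a single path of gates (the statistical max needed at circuit fan-ins is omitted); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_gates = 4, 3

d0 = np.array([10.0, 12.0, 9.0])                        # nominal gate delays (ps), illustrative
S = rng.uniform(0.1, 0.5, size=(n_gates, n_params))     # first-order sensitivities: d = d0 + S @ dX
Sigma = 0.20 * np.eye(n_params) + 0.05                  # covariance of the correlated variations dX

# PCA: rotate correlated parameters into independent, unit-variance principal components
eigval, eigvec = np.linalg.eigh(Sigma)
S_pc = (S @ eigvec) * np.sqrt(eigval)                   # sensitivities to the principal components

# delay of the 3-gate path: means add; variances of independent components add
path_mean = d0.sum()
path_std = np.linalg.norm(S_pc.sum(axis=0))
print(f"path delay ~ Normal(mean={path_mean:.2f} ps, std={path_std:.2f} ps)")
```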
A fuzzy MCDM method for solving marine transshipment container port selection problems “Transshipment” is a very popular and important issue in the present international trade container transportation market. In order to reduce the international trade container transportation operation cost, it is very important for shipping companies to choose the best transshipment container port. The aim of this paper is to present a new Fuzzy Multiple Criteria Decision Making Method (FMCDM) for solving the transshipment container port selection problem under fuzzy environment. In this paper we present first the canonical representation of multiplication operation on three fuzzy numbers, and then this canonical representation is applied to the selection of transshipment container port. Based on the canonical representation, the decision maker of shipping company can determine quickly the ranking order of all candidate transshipment container ports and select easily the best one.
Bounding the Dynamic Behavior of an Uncertain System via Polynomial Chaos-based Simulation Parametric uncertainty can represent parametric tolerance, parameter noise or parameter disturbances. The effects of these uncertainties on the time evolution of a system can be extremely significant, mostly when studying closed-loop operation of control systems. The presence of uncertainty makes the modeling process challenging, since it is impossible to express the behavior of the system with a deterministic approach. If the uncertainties can be defined in terms of probability density function, probabilistic approaches can be adopted. In many cases, the most useful aspect is the evaluation of the worst-case scenario, thus limiting the problem to the evaluation of the boundary of the set of solutions. This is particularly true for the analysis of robust stability and performance of a closed-loop system. The goal of this paper is to demonstrate how the polynomial chaos theory (PCT) can simplify the determination of the worst-case scenario, quickly providing the boundaries in time domain. The proposed approach is documented with examples and with the description of the Maple worksheet developed by the authors for the automatic processing in the PCT framework.
1.04192
0.04042
0.030317
0.020271
0.00948
0.004167
0.000392
0.000104
0.000006
0
0
0
0
0
Dummynet: a simple approach to the evaluation of network protocols Network protocols are usually tested in operational networks or in simulated environments. With the former approach it is not easy to set and control the various operational parameters such as bandwidth, delays, queue sizes. Simulators are easier to control, but they are often only an approximate model of the desired setting, especially for what regards the various traffic generators (both producers and consumers) and their interaction with the protocol itself.In this paper we show how a simple, yet flexible and accurate network simulator - dummynet - can be built with minimal modifications to an existing protocol stack, allowing experiments to be run on a standalone system. dummynet works by intercepting communications of the protocol layer under test and simulating the effects of finite queues, bandwidth limitations and communication delays. It runs in a fully operational system, hence allowing the use of real traffic generators and protocol implementations, while solving the problem of simulating unusual environments. With our tool, doing experiments with network protocols is as simple as running the desired set of applications on a workstation.A FreeBSD implementation of dummynet, targeted to TCP, is available from the author. This implementation is highly portable and compatible with other BSD-derived systems, and takes less than 300 lines of kernel code.
ENDE: An End-to-end Network Delay Emulator Tool for Multimedia Protocol Development Multimedia applications and protocols are constantly being developed to run over the Internet. A new protocol or application after being developed has to be tested on the real Internet or simulated on a testbed for debugging and performance evaluation. In this paper, we present a novel tool, ENDE, that can emulate end-to-end delays between two hosts without requiring access to the second host. The tool enables the user to test new multimedia protocols realistically on a single machine. In a delay-observing mode, ENDE can generate accurate traces of one-way delays between two hosts on the network. In a delay-impacting mode, ENDE can be used to simulate the functioning of a protocol or an application as if it were running on the network. We will show that ENDE allows accurate estimation of one-way transit times and hence can be used even when the forward and reverse paths are asymmetric between the two hosts. Experimental results are also presented to show that ENDE is fairly accurate in the delay-impacting mode.
Scalability and accuracy in a large-scale network emulator This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology. This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.
Measurement and analysis of single-hop delay on an IP backbone network We measure and analyze the single-hop packet delay through operational routers in the Sprint Internet protocol (IP) backbone network. After presenting our delay measurements through a single router for OC-3 and OC-12 link speeds, we propose a methodology to identify the factors contributing to single-hop delay. In addition to packet processing, transmission, and queueing delay at the output link, we observe the presence of very large delays that cannot be explained within the context of a first-in first-out output queue model. We isolate and analyze these outliers. Results indicate that there is very little queueing taking place in Sprint's backbone. As link speeds increase, transmission delay decreases and the dominant part of single-hop delay is packet processing time. We show that if a packet is received and transmitted on the same linecard, it experiences less than 20 μs of delay. If the packet is transmitted across the switch fabric, its delay doubles in magnitude. We observe that processing due to IP options results in single-hop delays in the order of milliseconds. Milliseconds of delay may also be experienced by packets that do not carry IP options. We attribute those delays to router idiosyncratic behavior that affects less than 1% of the packets. Finally, we show that the queueing delay distribution is long-tailed and can be approximated with a Weibull distribution with the scale parameter a=0.5 and the shape parameter b=0.6 to 0.82.
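As a small companion to the delay analysis above, the following sketch fits a two-parameter Weibull (shape and scale, location fixed at zero) to a set of queueing-delay samples using scipy; the synthetic samples merely stand in for measured single-hop delays.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# synthetic stand-in for measured queueing delays (microseconds)
delays = weibull_min.rvs(c=0.7, scale=5.0, size=10_000, random_state=rng)

# maximum-likelihood fit of shape and scale with the location pinned at zero
shape, loc, scale = weibull_min.fit(delays, floc=0)
print(f"fitted Weibull: shape={shape:.2f}, scale={scale:.2f}")
```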
A Measurement-Based Modeling Approach for Network-Induced Packet Delay An approach is presented to capture and model Internet end-to-end packet delay behavior using ARMA and ARIMA models. Autocorrelation (ACF) and Partial Autocorrelation (PACF) functions are used to identify the most appropriate model and the model order. Impact due to sending rate and packet size of the probe, and the available link capacity on these two metrics are investigated. Results indicate that the models presented reflect accurately the effect of packet correlation induced by the network. Modeling Inter-Packet Gap (IPG) is an alternative for capturing the effect of the network on a packet stream. A methodology for fitting ARMA and ARIMA models to end-to-end packet delay and IPG series is presented.
Look-ahead rate adaptation algorithm for DASH under varying network environments Dynamic Adaptive Streaming over HTTP (DASH) is slowly becoming the most popular online video streaming technology. DASH enables the video player to adapt the quality of the multimedia content being downloaded in order to match the varying network conditions. The key challenge with DASH is to decide the optimal video quality for the next video segment under the current network conditions. The aim is to download the next segment before the player experiences buffer-starvation. Several rate adaptation methodologies proposed so far rely on the TCP throughput measurements and the current buffer occupancy. However, these techniques, do not consider any information regarding the next segment that is to be downloaded. They assume that the segment sizes are uniform and assign equal weights to all the segments. However, due to the video encoding techniques employed, different segments of the video with equal playback duration are found to be of different sizes. In the current paper, we propose to list the individual segment characteristics in the Media Presentation Description (MPD) file during the preprocessing stage; this is later used in the segment download time estimations. We also propose a novel rate adaptation methodology that uses the individual segment sizes in addition to the measured TCP throughput and the buffer occupancy estimate for the best video rate to be used for the next segments.
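A toy version of the look-ahead idea described above: given the per-representation size of the next segment (as it would be listed in the extended MPD), the measured throughput, and the current buffer level, pick the highest quality whose download is predicted to finish before the buffer runs low. The function, the safety margin, and the numbers are illustrative, not the paper's exact algorithm.

```python
def pick_bitrate(next_segment_bytes, throughput_bps, buffer_s, segment_dur_s, safety=0.8):
    """Return the highest quality level whose predicted download time still leaves
    at least one segment duration of playback in the buffer."""
    for level in sorted(next_segment_bytes, key=next_segment_bytes.get, reverse=True):
        download_time = next_segment_bytes[level] * 8 / (throughput_bps * safety)
        if buffer_s - download_time >= segment_dur_s:
            return level
    return min(next_segment_bytes, key=next_segment_bytes.get)   # fall back to the lowest quality

# sizes (bytes) of the *next* segment at each representation, e.g. taken from the MPD
sizes = {"240p": 3.0e5, "480p": 7.5e5, "720p": 1.5e6, "1080p": 3.2e6}
print(pick_bitrate(sizes, throughput_bps=2.0e6, buffer_s=12.0, segment_dur_s=4.0))
```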
Transport and Storage Systems for 3-D Video Using MPEG-2 Systems, RTP, and ISO File Format Three-dimensional video based on stereo and multiview video representations is currently being introduced to the home through various channels, including broadcast such as via cable, terrestrial and satellite transmission, streaming and download through the Internet, as well as on storage media such as Blu-ray discs. In order to deliver 3-D content to the consumer, different media system technologies have been standardized or are currently under development. The most important standards are MPEG-2 systems, which is used for digital broadcast and storage on Blu-ray discs, real-time transport protocol (RTP), which is used for real-time transmissions over the Internet, and the ISO base media file format, which can be used for progressive download in video-on-demand applications. In this paper, we give an overview of these three system layer approaches, where the main focus is on the multiview video coding (MVC) extension of H.264/AVC and the application of the system approaches to the delivery and storage of MVC.
Cross-layer design of ad hoc networks for real-time video streaming Cross-layer design breaks away from traditional network design where each layer of the protocol stack operates independently. We explore the potential synergies of exchanging information between different layers to support real-time video streaming. In this new approach information is exchanged between different layers of the protocol stack, and end-to-end performance is optimized by adapting to this information at each protocol layer. We discuss key parameters used in the cross-layer information exchange along with the associated cross-layer adaptation. Substantial performance gains through this cross-layer design are demonstrated for video streaming.
ePF-DASH: Energy-efficient prefetching based dynamic adaptive streaming over HTTP CISCO VNI predicted an average annual growth rate of 69.1% for mobile video traffic between 2013 and 2018 and accordingly much academic research related to video streaming has been initiated. In video streaming, Adaptive Bitrate (ABR) is a streaming technique in which a source video is stored on a server at variable encoding rates and each streaming user requests the most appropriate video encoding rate from the server considering their channel capacity or signal power. However, these days, ABR related studies are only focusing on real-time rate adaptation and omitting efficiency in terms of energy. These methods do not consider the energy limited characteristics of mobile devices, which cause dissatisfaction to the streaming users. In this paper, we propose an energy efficient prefetching based dynamic adaptive streaming technique by considering the limited characteristics of the batteries used in mobile devices, in order to reduce the energy waste and provide a similar level of service in terms of the average video rate compared to the latest ABR streaming technique which does not consider the energy consumption.
SOS: The MOS is not enough! When it comes to analysis and interpretation of the results of subjective QoE studies, one often witnesses a lack of attention to the diversity in subjective user ratings. In extreme cases, solely Mean Opinion Scores (MOS) are reported, causing the loss of important information on the user rating diversity. In this paper, we emphasize the importance of considering the Standard deviation of Opinion Scores (SOS) and analyze important characteristics of this measure. As a result, we formulate the SOS hypothesis which postulates a square relationship between the MOS and the SOS. We demonstrate the validity and applicability of the SOS hypothesis for a wide range of studies. The main benefit of the SOS hypothesis is that it allows for a compact, yet still comprehensive statistical summary of subjective user tests. Furthermore, it supports checking the reliability of test result data sets as well as their comparability across different QoE studies.
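For concreteness, the square relationship postulated above can be written as SOS^2 = a·(MOS − 1)·(5 − MOS) on a 5-point scale, with a single parameter a characterizing the study. The sketch below fits a by least squares to per-condition MOS/SOS pairs; the sample values are hypothetical, and the exact functional form should be checked against the paper.

```python
import numpy as np

def sos_parameter(mos, sos, lo=1.0, hi=5.0):
    """Least-squares estimate of the SOS-hypothesis parameter a in
    SOS^2 = a * (MOS - lo) * (hi - MOS) for ratings on the [lo, hi] scale."""
    x = (np.asarray(mos) - lo) * (hi - np.asarray(mos))
    y = np.asarray(sos) ** 2
    return float(x @ y / (x @ x))

mos = [1.8, 2.5, 3.4, 4.2, 4.8]      # hypothetical per-condition mean opinion scores
sos = [0.9, 1.1, 1.0, 0.7, 0.4]      # corresponding standard deviations of opinion scores
print("estimated a:", round(sos_parameter(mos, sos), 3))
```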
Genetic Learning Of Fuzzy Rule-Based Classification Systems Cooperating With Fuzzy Reasoning Methods In this paper, we present a multistage genetic learning process for obtaining linguistic fuzzy rule-based classification systems that integrates fuzzy reasoning methods cooperating with the fuzzy rule base and learns the best set of linguistic hedges for the linguistic variable terms. We show the application of the genetic learning process to two well known sample bases, and compare the results with those obtained from different learning algorithms. The results show the good behavior of the proposed method, which maintains the linguistic description of the fuzzy rules.
Granular representation and granular computing with fuzzy sets In this study, we introduce a concept of a granular representation of numeric membership functions of fuzzy sets, which offers a synthetic and qualitative view at fuzzy sets and their ensuing processing. The notion of consistency of the granular representation is formed, which helps regard the problem as a certain optimization task. More specifically, the consistency is referred to a certain operation φ, which gives rise to the concept of φ-consistency. Likewise introduced is a concept of granular consistency with regard to a collection of several operations. Given the essential role played by logic operators in computing with fuzzy sets, detailed investigations include and- and or-consistency as well as (and, or)-consistency of granular representations of membership functions with the logic operators implemented in the form of various t-norms and t-conorms. The optimization framework supporting the realization of the φ-consistent optimization process is provided through particle swarm optimization. Further conceptual and representation issues impacting the processing of fuzzy sets are discussed as well.
A Stochastic Computational Approach for Accurate and Efficient Reliability Evaluation Reliability is fast becoming a major concern due to the nanometric scaling of CMOS technology. Accurate analytical approaches for the reliability evaluation of logic circuits, however, have a computational complexity that generally increases exponentially with circuit size. This makes intractable the reliability analysis of large circuits. This paper initially presents novel computational models based on stochastic computation; using these stochastic computational models (SCMs), a simulation-based analytical approach is then proposed for the reliability evaluation of logic circuits. In this approach, signal probabilities are encoded in the statistics of random binary bit streams and non-Bernoulli sequences of random permutations of binary bits are used for initial input and gate error probabilities. By leveraging the bit-wise dependencies of random binary streams, the proposed approach takes into account signal correlations and evaluates the joint reliability of multiple outputs. Therefore, it accurately determines the reliability of a circuit; its precision is only limited by the random fluctuations inherent in the stochastic sequences. Based on both simulation and analysis, the SCM approach takes advantages of ease in implementation and accuracy in evaluation. The use of non-Bernoulli sequences as initial inputs further increases the evaluation efficiency and accuracy compared to the conventional use of Bernoulli sequences, so the proposed stochastic approach is scalable for analyzing large circuits. It can further account for various fault models as well as calculating the soft error rate (SER). These results are supported by extensive simulations and detailed comparison with existing approaches.
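To give a feel for the stochastic-computational idea above, the sketch below encodes signal probabilities as random bit streams and evaluates a noisy NAND gate by bitwise operations, injecting the gate error by XOR-ing an error stream. Unlike the paper, which advocates non-Bernoulli sequences (random permutations of a fixed number of ones), this toy version simply uses Bernoulli streams.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 100_000                                    # bit-stream length; precision grows with L

def stream(p):
    """Encode a signal or error probability p as a random Bernoulli bit stream."""
    return rng.random(L) < p

a, b = stream(0.9), stream(0.8)                # input signal probabilities
err = stream(0.05)                             # gate error probability

out = ~(a & b) ^ err                           # NAND, with the output flipped where err is 1
print("P(out = 1) ~", out.mean())              # exact value: 0.95*0.28 + 0.05*0.72 = 0.302
```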
Kinsight: Localizing and Tracking Household Objects Using Depth-Camera Sensors We solve the problem of localizing and tracking household objects using a depth-camera sensor network. We design and implement Kinsight, which tracks household objects indirectly--by tracking human figures, and detecting and recognizing objects from human-object interactions. We devise two novel algorithms: (1) Depth Sweep--that uses depth information to efficiently extract objects from an image, and (2) Context Oriented Object Recognition--that uses location history and activity context along with an RGB image to recognize objects at home. We thoroughly evaluate Kinsight's performance with a rich set of controlled experiments. We also deploy Kinsight in real-world scenarios and show that it achieves an average localization error of about 13 cm.
1.013659
0.01527
0.013037
0.013037
0.013037
0.006263
0.001264
0.000135
0.000039
0.000007
0
0
0
0
Defuzzification within a multicriteria decision model In many cases, criterion values are crisp in nature, and their values are determined by economic instruments, mathematical models, and/or by engineering measurement. However, there are situations when the evaluation of alternatives must include the imprecision of established criteria, and the development of a fuzzy multicriteria decision model is necessary to deal with either "qualitative" (unquantifiable or linguistic) or incomplete information. The proposed fuzzy multicriteria decision model (FMCDM) consists of two phases: the CFCS phase - Converting the Fuzzy data into Crisp Scores, and the MCDM phase - MultiCriteria Decision Making. This model is applicable for defuzzification within the MCDM model with a mixed set of crisp and fuzzy criteria. A newly developed CFCS method is based on the procedure of determining the left and right scores by fuzzy min and fuzzy max, respectively, and the total score is determined as a weighted average according to the membership functions. The advantage of this defuzzification method is illustrated by some examples, comparing the results from three considered methods.
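The CFCS idea can be sketched in a few lines of NumPy. The version below follows the variant of the procedure commonly cited in the fuzzy DEMATEL literature (normalize the triangular fuzzy numbers, form left and right scores from fuzzy min and fuzzy max, combine them into a weighted total score, then de-normalize); the paper's exact formulas and the linguistic scale used here are assumptions for illustration.

    import numpy as np

    def cfcs(tfns):
        # CFCS-style defuzzification of triangular fuzzy numbers given as rows (l, m, r).
        tfns = np.asarray(tfns, dtype=float)
        l, m, r = tfns[:, 0], tfns[:, 1], tfns[:, 2]
        lo, span = l.min(), r.max() - l.min()
        xl, xm, xr = (l - lo) / span, (m - lo) / span, (r - lo) / span
        ls = xm / (1.0 + xm - xl)                           # left score (fuzzy min side)
        rs = xr / (1.0 + xr - xm)                           # right score (fuzzy max side)
        x = (ls * (1.0 - ls) + rs * rs) / (1.0 - ls + rs)   # weighted total score
        return lo + x * span                                # de-normalized crisp values

    # Example: three linguistic ratings expressed as triangular fuzzy numbers
    ratings = [(0.0, 0.25, 0.5), (0.25, 0.5, 0.75), (0.5, 0.75, 1.0)]
    print(cfcs(ratings))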
Developing global managers’ competencies using the fuzzy DEMATEL method Modern global managers are required to possess a set of competencies or multiple intelligences in order to meet pressing business challenges. Hence, expanding global managers’ competencies is becoming an important issue. Many scholars and specialists have proposed various competency models containing a list of required competencies. But it is hard for someone to master a broad set of competencies at the same time. Here arises an imperative issue on how to enrich global managers’ competencies by way of segmenting a set of competencies into some portions in order to facilitate competency development with a stepwise mode. To solve this issue involving the vagueness of human judgments, we have proposed an effective method combining fuzzy logic and Decision Making Trial and Evaluation Laboratory (DEMATEL) to segment required competencies for better promoting the competency development of global managers. Additionally, an empirical study is presented to illustrate the application of the proposed method.
A novel fuzzy Dempster-Shafer inference system for brain MRI segmentation Brain Magnetic Resonance Imaging (MRI) segmentation is a challenging task due to the complex anatomical structure of brain tissues as well as intensity non-uniformity, partial volume effects and noise. Segmentation methods based on fuzzy approaches have been developed to overcome the uncertainty caused by these effects. In this study, a novel combination of a fuzzy inference system and Dempster-Shafer theory is applied to brain MRI for the purpose of segmentation, where the pixel intensity and the spatial information are used as features. In the proposed modeling, the consequent part of the rules is a Dempster-Shafer belief structure. The novel aspect of this work is that the rules are interpreted as pieces of evidence. The results show that the proposed algorithm, called FDSIS, has satisfactory outputs on both simulated and real brain MRI datasets.
A fuzzy MCDM approach for evaluating banking performance based on Balanced Scorecard The paper proposes a Fuzzy Multiple Criteria Decision Making (FMCDM) approach for banking performance evaluation. Drawing on the four perspectives of a Balanced Scorecard (BSC), this research first summarized the evaluation indexes synthesized from the literature relating to banking performance. Then, for screening these indexes, 23 indexes fit for banking performance evaluation were selected through expert questionnaires. Furthermore, the relative weights of the chosen evaluation indexes were calculated by Fuzzy Analytic Hierarchy Process (FAHP). The three MCDM analytical tools of SAW, TOPSIS, and VIKOR were then adopted to rank the banking performance and to identify the gaps for improvement, using three banks as an empirical example. The analysis results highlight the critical aspects of the evaluation criteria as well as the gaps to be closed for achieving the aspired/desired level of banking performance. They show that the proposed FMCDM evaluation model of banking performance using the BSC framework can be a useful and effective assessment tool.
Combining fuzzy AHP with MDS in identifying the preference similarity of alternatives Multidimensional scaling (MDS) analysis is a dimension-reduction technique that is used to estimate the coordinates of a set of objects. However, not every criterion used in multidimensional scaling is equally and precisely weighted in the real world. To address this issue, we use fuzzy analytic hierarchy process (FAHP) to determine the weighting of subjective/perceptive judgments for each criterion and to derive fuzzy synthetic utility values of alternatives in a fuzzy multi-criteria decision-making (FMCDM) environment. Furthermore, we combine FAHP with MDS to identify the similarities and preferences of alternatives in terms of the axes of the space, which represent the perceived attributes and characteristics of those alternatives. By doing so, the visual dimensionality and configuration or pattern of alternatives whose weighted distance structure best fits the input data can be obtained and explained easily. A real case of expatriate assignment decision-making was used to demonstrate that the combination of FAHP and MDS results in a meaningful visual map.
Extensions of the multicriteria analysis with pairwise comparison under a fuzzy environment Multicriteria decision-making (MCDM) problems often involve a complex decision process in which multiple requirements and fuzzy conditions have to be taken into consideration simultaneously. The existing approaches for solving this problem in a fuzzy environment are complex. Combining the concepts of grey relation and pairwise comparison, a new fuzzy MCDM method is proposed. First, the fuzzy analytic hierarchy process (AHP) is used to construct fuzzy weights of all criteria. Then, linguistic terms characterized by L–R triangular fuzzy numbers are used to denote the evaluation values of all alternatives versus subjective and objective criteria. Finally, the aggregation fuzzy assessments of different alternatives are ranked to determine the best selection. Furthermore, this paper uses a numerical example of location selection to demonstrate the applicability of the proposed method. The study results show that this method is an effective means for tackling MCDM problems in a fuzzy environment.
The interval-valued fuzzy TOPSIS method and experimental analysis The purpose of this paper is to extend the TOPSIS method based on interval-valued fuzzy sets in decision analysis. Hwang and Yoon developed the technique for order preference by similarity to ideal solution (TOPSIS) in 1981. TOPSIS has been widely used to rank the preference order of alternatives and determine the optimal choice. Considering the fact that it is difficult to precisely attach numerical measures to the relative importance of the attributes and to the impacts of the alternatives on these attributes in some cases, the TOPSIS method has been extended for interval-valued fuzzy data in this paper. In addition, a comprehensive experimental analysis to observe the interval-valued fuzzy TOPSIS results yielded by different distance measures is presented. A comparative analysis of interval-valued fuzzy TOPSIS rankings from each distance measure is illustrated with discussions on consistency rates, contradiction rates, and average Spearman correlation coefficients. Finally, a second-order regression model is provided to highlight the effects of the number of alternatives, the number of attributes, and distance measures on average Spearman correlation coefficients.
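For reference, a minimal NumPy sketch of the crisp TOPSIS backbone that the extension above builds on; the interval-valued fuzzy variant replaces the crisp ratings, ideal solutions, and distances with interval-valued fuzzy counterparts. The decision matrix, weights, and criterion types below are illustrative assumptions.

    import numpy as np

    def topsis(X, w, benefit):
        # X: alternatives x criteria matrix, w: criterion weights,
        # benefit: True for benefit criteria, False for cost criteria.
        X = np.asarray(X, dtype=float)
        R = X / np.linalg.norm(X, axis=0)           # vector normalization
        V = R * np.asarray(w, dtype=float)          # weighted normalized matrix
        benefit = np.asarray(benefit)
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)  # distance to positive ideal
        d_minus = np.linalg.norm(V - anti, axis=1)  # distance to negative ideal
        return d_minus / (d_plus + d_minus)         # closeness coefficient, higher is better

    # Three alternatives, three criteria; the last criterion is a cost
    X = [[7, 9, 9], [8, 7, 8], [9, 6, 7]]
    print(topsis(X, w=[0.4, 0.3, 0.3], benefit=[True, True, False]))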
Signed Distance-Based TOPSIS Method For Multiple Criteria Decision Analysis Based On Generalized Interval-Valued Fuzzy Numbers The theory of interval-valued fuzzy sets is very valuable for modeling the impressions of decision makers. In addition, it gives the ability to quantify the ambiguous nature of subjective judgments in an easy way. In this paper, by extending the technique for order preference by similarity to ideal solution (TOPSIS), a useful method based on generalized interval-valued trapezoidal fuzzy numbers (GITrFNs) is proposed for solving multiple criteria decision analysis (MCDA) problems. In view of the complexity of handling sophisticated data such as GITrFNs, this paper employs the concept of signed distances to establish a simple and effective MCDA method based on the main structure of TOPSIS. An algorithm based on the TOPSIS method is established to determine the priority order of the given alternatives by using properties of signed distances. Finally, the feasibility of the proposed method is illustrated by a practical example of supplier selection.
Multi-Attribute Group Decision Making Methods With Proportional 2-Tuple Linguistic Assessments And Weights The proportional 2-tuple linguistic model provides a tool to deal with linguistic term sets that are not uniformly and symmetrically distributed. This study further develops multi-attribute group decision making methods with linguistic assessments and linguistic weights, based on the proportional 2-tuple linguistic model. Firstly, this study defines some new operations in proportional 2-tuple linguistic model, including weighted average aggregation operator with linguistic weights, ordered weighted average operator with linguistic weights and the distance between proportional linguistic 2-tuples. Then, four multi-attribute group decision making methods are presented. They are the method based on the proportional 2-tuple linguistic aggregation operator, technique for order preference by similarity to ideal solution (TOPSIS) with proportional 2-tuple linguistic information, elimination et choice translating reality (ELECTRE) with proportional 2-tuple linguistic information, preference ranking organization methods for enrichment evaluations (PROMETHEE) with proportional 2-tuple linguistic information. Finally, an example is given to illustrate the effectiveness of the proposed methods.
Fuzzy assessment method on sampling survey analysis Developing a well-designed market survey questionnaire will ensure that surveyors get the information they need about the target market. Traditional sampling surveys via questionnaire, which rate items with linguistic variables, are inherently vague and have difficulty reflecting an interviewee's incomplete and uncertain thoughts. Therefore, if fuzzy measures are used to express the degree of an interviewee's feelings in his or her own terms, the sampling result will be closer to the interviewee's real opinion. In this study, we propose a fuzzy approach to sampling surveys for aggregated assessment analysis. The proposed fuzzy assessment method for sampling survey analysis makes it easy to assess the sampling survey and to compute the aggregated evaluation.
Type-2 fuzzy activation function for multilayer feedforward neural networks This paper presents a new type-2 fuzzy based activation function for multilayer feedforward neural networks. In place of conventional activation functions, the proposed approach uses a type-2 fuzzy set to accelerate backpropagation learning and reduce the number of neurons in complex networks. Furthermore, the type-2 fuzzy based activation function helps to minimize the effects of uncertainties on the neural network. The performance of the type-2 fuzzy activation function is demonstrated in simulations on an XOR problem and on speed estimation of an induction motor. A comparison between the proposed activation function and commonly used activation functions shows accelerated convergence and reduced sensitivity to uncertainties with the proposed method. The simulation results show that the proposed method is more suitable for complex systems.
The effects of multiview depth video compression on multiview rendering This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient prediction, and the novel platelet-based coding algorithm, characterized by being adapted to the special characteristics of depth-images. Since depth-images are a 2D representation of the 3D scene geometry, depth-image errors lead to geometry distortions. Therefore, the influence of geometry distortions resulting from coding artifacts is evaluated for both coding approaches in two different ways. First, the variation of 3D surface meshes is analyzed using the Hausdorff distance and second, the distortion is evaluated for 2D view synthesis rendering, where color and depth information are used together to render virtual intermediate camera views of the scene. The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264 due to improved sharp edge preservation. Therefore, depth coding needs to be evaluated with respect to geometry distortions.
Nearly optimal sparse Fourier transform We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal. We show: (i) an O(k log n)-time randomized algorithm for the case where the input signal has at most k non-zero Fourier coefficients, and (ii) an O(k log n log(n/k))-time randomized algorithm for general input signals. Both algorithms achieve o(n log n) time, and thus improve over the Fast Fourier Transform, for any k = o(n). They are the first known algorithms that satisfy this property. Also, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = n^{Ω(1)}. We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least Ω(k log(n/k) / log log n) signal samples, even if it is allowed to perform adaptive sampling.
Neuroinformatics I: Fuzzy Neural Networks of More-Equal-Less Logic (Static) This article analyzes the possibilities of neural nets composed of neurons - the summators of continuously varied impulse frequencies characterized by non-linearity N, when informational operations of fuzzy logic are performed. According to the facts of neurobiological research, the neurons are divided into stellate and pyramidal ones, and their functional-static characteristics are presented. The operations performed by stellate neurons are characterized as qualitative (not quantitative) informational estimations "more", "less", "equal", i.e., they function according to "more-equal-less" (M-E-L) logic. Pyramidal neurons with suppressing entries perform algebraic signal operations, and as a result the output signals are controlled by means of the universal logical function "NON disjunction" (Peirce arrow or dagger function). It is demonstrated how stellate and pyramidal neurons can be used to synthesize neural nets functioning in parallel and realizing all logical and elementary algebraic functions, as well as to perform conditional controlled operations of information processing. Such neural nets, functioning by the principles of M-E-L and suppression logic, can perform signal classification, filtration and other informational procedures by non-quantitative assessment, and their informational possibilities (the amount of qualitative states), depending on the number n of analyzing elements-neurons, are proportional to n! or even to (2n) ∗ n!, i.e., much bigger than the possibilities of traditional informational automata functioning by the binary principle. In summary, it is stated that such neural nets are informational subsystems of parallel functioning and analogical neurocomputers of hybrid action.
1.014278
0.012413
0.011302
0.007319
0.00493
0.003283
0.000925
0.000207
0.000095
0.000037
0
0
0
0
Compressive sensing on a CMOS separable transform image sensor This paper demonstrates a computational image sensor capable of implementing compressive sensing operations. Instead of sensing raw pixel data, this image sensor projects the image onto a separable 2-D basis set and measures the corresponding expansion coefficients. The inner products are computed in the analog domain using a computational focal plane and an analog vector-matrix multiplier (VMM). This is more than mere postprocessing, as the processing circuitry is integrated as part of the sensing circuitry itself. We implement compressive imaging on the sensor by using pseudorandom vectors called noiselets for the measurement basis. This choice allows us to reconstruct the image from only a small percentage of the transform coefficients. This effectively compresses the image without any digital computation and reduces the throughput of the analog-to-digital converter (ADC). The reduction in throughput has the potential to reduce power consumption and increase the frame rate. The general architecture and a detailed circuit implementation of the image sensor are discussed. We also present experimental results that demonstrate the advantages of using the sensor for compressive imaging rather than more traditional coded imaging strategies.
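The separable measurement model is easy to emulate in NumPy: the sensor computes Y = A X B^T for per-dimension basis matrices A and B, and with a complete orthonormal basis the image is recovered exactly from the coefficients. Random orthonormal matrices stand in here for the noiselets used on the chip, and the compressive step (keeping only a fraction of Y and solving a sparse-recovery problem) is omitted; all sizes are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    X = rng.random((n, n))                          # toy "image"

    # One orthonormal basis matrix per dimension (noiselets in the actual sensor)
    Q_row, _ = np.linalg.qr(rng.standard_normal((n, n)))
    Q_col, _ = np.linalg.qr(rng.standard_normal((n, n)))

    # What the focal plane plus analog VMM compute: separable expansion coefficients
    Y = Q_row @ X @ Q_col.T

    # With all coefficients kept, the image is recovered exactly
    X_hat = Q_row.T @ Y @ Q_col
    print(np.allclose(X, X_hat))                    # True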
Distributed sampling of signals linked by sparse filtering: theory and applications We study the distributed sampling and centralized reconstruction of two correlated signals, modeled as the input and output of an unknown sparse filtering operation. This is akin to a Slepian-Wolf setup, but in the sampling rather than the lossless compression case. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in what we call almost sure reconstruction, we allow to have a small set (with measure zero) of unrecoverable signals. We derive achievability bounds on the number of samples needed for both scenarios. Our results show that, only in the almost sure setup can we effectively exploit the signal correlations to achieve effective gains in sampling efficiency. In addition to the above theoretical analysis, we propose an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters. We evaluate the performance of our method in one synthetic scenario, and two practical applications, including the distributed audio sampling in binaural hearing aids and the efficient estimation of room impulse responses. The numerical results confirm the effectiveness and robustness of the proposed algorithm in both synthetic and practical setups.
Sparse representation and position prior based face hallucination upon classified over-complete dictionaries In compressed sensing theory, decomposing a signal based upon redundant dictionaries is of considerable interest for data representation in signal processing. The signal is approximated by an over-complete dictionary instead of an orthonormal basis for adaptive sparse image decompositions. Existing sparsity-based super-resolution methods commonly train all atoms to construct only a single dictionary for super-resolution. However, this approach results in low precision of reconstruction. Furthermore, the process of generating such dictionary usually involves a huge computational cost. This paper proposes a sparse representation and position prior based face hallucination method for single face image super-resolution. The high- and low-resolution atoms for the first time are classified to form local dictionaries according to the different regions of human face, instead of generating a single global dictionary. Different local dictionaries are used to hallucinate the corresponding regions of face. The patches of the low-resolution face inputs are approximated respectively by a sparse linear combination of the atoms in the corresponding over-complete dictionaries. The sparse coefficients are then obtained to generate high-resolution data under the constraint of the position prior of face. Experimental results illustrate that the proposed method can hallucinate face images of higher quality with a lower computational cost compared to other existing methods.
Motion estimated and compensated compressed sensing dynamic magnetic resonance imaging: What we can learn from video compression techniques Compressed sensing has become an extensive research area in the MR community because of the opportunity for unprecedented high spatio-temporal resolution reconstruction. Because dynamic magnetic resonance imaging (MRI) usually has huge redundancy along the temporal direction, compressed sensing theory can be effectively used for this application. Historically, exploiting temporal redundancy has been one of the main research topics in video compression. This article compares the similarities and differences of compressed sensing dynamic MRI and video compression and discusses what MR can learn from the history of video compression research. In particular, we demonstrate that the motion estimation and compensation used in video compression can also be a powerful tool to reduce the sampling requirement in dynamic MRI. Theoretical derivation and experimental results are presented to support our view. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 81–98, 2010
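As a concrete illustration of the motion-estimation step borrowed from video compression, below is a tiny NumPy sketch of exhaustive block matching with a sum-of-absolute-differences criterion; the frame content, block size, and search range are illustrative assumptions, and this is not the reconstruction algorithm proposed in the paper.

    import numpy as np

    def block_match(ref, cur, y, x, B=8, search=4):
        # Find the displacement (dy, dx) such that the BxB block of `cur` at (y, x)
        # best matches `ref` within +/- `search` pixels (minimum SAD).
        block = cur[y:y + B, x:x + B]
        best, best_dv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= ref.shape[0] - B and 0 <= xx <= ref.shape[1] - B:
                    sad = np.abs(ref[yy:yy + B, xx:xx + B] - block).sum()
                    if sad < best:
                        best, best_dv = sad, (dy, dx)
        return best_dv

    rng = np.random.default_rng(2)
    ref = rng.random((32, 32))
    cur = np.roll(ref, shift=(2, -1), axis=(0, 1))  # content moved by (+2, -1)
    # Prints (-2, 1): the matching block sits 2 rows up and 1 column right in the reference
    print(block_match(ref, cur, 8, 8))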
Texas Hold 'Em algorithms for distributed compressive sensing This paper develops a new class of algorithms for signal recovery in the distributed compressive sensing (DCS) framework. DCS exploits both intra-signal and inter-signal correlations through the concept of joint sparsity to further reduce the number of measurements required for recovery. DCS is well-suited for sensor network applications due to its universality, computational asymmetry, tolerance to quantization and noise, and robustness to measurement loss. In this paper we propose recovery algorithms for the sparse common and innovation joint sparsity model. Our approach leads to a class of efficient algorithms, the Texas Hold 'Em algorithms, which are scalable both in terms of communication bandwidth and computational complexity.
Image representation by compressive sensing for visual sensor networks This paper addresses the image representation problem in visual sensor networks. We propose a new image representation method for visual sensor networks based on compressive sensing (CS). CS is a new sampling method for sparse signals, which is able to compress the input data in the sampling process. Combining both signal sampling and data compression, CS is well suited to image representation for reducing the computational complexity of the image/video encoder in visual sensor networks, where computation resources are extremely limited. Since CS is more efficient for sparse signals, in our scheme the input image is first decomposed into two components, i.e., dense and sparse components; then the dense component is encoded by a traditional approach (JPEG or JPEG 2000) while the sparse component is encoded by a CS technique. In order to improve the rate-distortion performance, we leverage the strong correlation between the dense and sparse components by using a piecewise autoregressive model to construct a prediction of the sparse component from the corresponding dense component. Given the measurements and the prediction of the sparse component as an initial guess, we use projection onto convex sets (POCS) to reconstruct the sparse component. Our method considerably reduces the number of random measurements needed for CS reconstruction and the decoding computational complexity, compared to existing CS methods. In addition, our experimental results show that our method may achieve up to a 2 dB gain in PSNR over existing CS-based schemes, for the same number of measurements.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
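A small dense-matrix sketch of the variable-splitting idea for the synthesis/l1 case, written as the alternating direction method of multipliers with the splitting x = z; the paper's algorithm targets large-scale imaging operators and also covers frame-based and total-variation regularizers, and the problem sizes and parameters below are illustrative.

    import numpy as np

    def soft(v, t):
        # Soft-thresholding: proximal operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_l1(A, b, lam, rho=1.0, iters=200):
        # ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the splitting x = z
        n = A.shape[1]
        AtA, Atb = A.T @ A, A.T @ b
        x = z = u = np.zeros(n)
        for _ in range(iters):
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))  # quadratic step
            z = soft(x + u, lam / rho)                                       # shrinkage step
            u = u + x - z                                                    # dual update
        return z

    rng = np.random.default_rng(3)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    # Approximately recovers the three nonzero coefficients (up to the usual l1 shrinkage)
    print(np.round(admm_l1(A, b, lam=0.1)[[5, 50, 120]], 2))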
Compressive-projection principal component analysis. Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in a light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data.
An Anisotropic Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen-Loève truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.
Nonparametric multivariate density estimation: a comparative study The paper algorithmically and empirically studies two major types of nonparametric multivariate density estimation techniques, where no assumption is made about the data being drawn from any of known parametric families of distribution. The first type is the popular kernel method (and several of its variants) which uses locally tuned radial basis (e.g., Gaussian) functions to interpolate the multidimensional density; the second type is based on an exploratory projection pursuit technique which interprets the multidimensional density through the construction of several 1D densities along highly “interesting” projections of multidimensional data. Performance evaluations using training data from mixture Gaussian and mixture Cauchy densities are presented. The results show that the curse of dimensionality and the sensitivity of control parameters have a much more adverse impact on the kernel density estimators than on the projection pursuit density estimators
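A minimal NumPy sketch of the first family of estimators discussed above: a multivariate product-Gaussian kernel density estimate with Scott's-rule bandwidths (an assumption here; the projection pursuit estimators compared in the paper are not shown).

    import numpy as np

    def kde(data, query, h=None):
        # data: (n, d) samples, query: (m, d) evaluation points.
        data, query = np.atleast_2d(data), np.atleast_2d(query)
        n, d = data.shape
        if h is None:
            h = n ** (-1.0 / (d + 4)) * data.std(axis=0, ddof=1)   # Scott's rule per dimension
        diff = (query[:, None, :] - data[None, :, :]) / h          # (m, n, d) scaled differences
        kern = np.exp(-0.5 * np.sum(diff ** 2, axis=2))            # product-Gaussian kernel
        return kern.sum(axis=1) / (n * np.prod(h) * (2 * np.pi) ** (d / 2))

    rng = np.random.default_rng(4)
    samples = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)
    # Density is high near the mean and nearly zero far out in the tail
    print(kde(samples, [[0.0, 0.0], [3.0, 3.0]]))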
An optimal algorithm for approximate nearest neighbor searching in fixed dimensions Consider a set S of n data points in real d-dimensional space, Rd, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ Rd, the closest point of S to q can be reported quickly. Given any positive real ε, data point p is a (1 + ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in Rd in O(dn log n) time and O(dn) space, so that given a query point q ∈ Rd and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
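The (1 + ε) guarantee is easy to exercise with SciPy's kd-tree, which exposes the same approximate-query contract (it is not the BBD-tree data structure that achieves the bounds stated above); the point count, dimension, and ε below are arbitrary choices.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(5)
    pts = rng.random((10_000, 8))                    # n points in d = 8 dimensions
    tree = cKDTree(pts)                              # preprocessing

    q = rng.random(8)
    eps = 0.5                                        # approximation factor is (1 + eps)
    d_approx, _ = tree.query(q, k=1, eps=eps)        # (1 + eps)-approximate nearest neighbor
    d_exact, _ = tree.query(q, k=1)                  # exact nearest neighbor for comparison
    print(d_approx <= (1 + eps) * d_exact)           # guaranteed True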
Sensing increased image resolution using aperture masks We present a technique to construct increased-resolution images from multiple photos taken without moving the camera or the sensor. Like other super-resolution techniques, we capture and merge multiple images, but instead of moving the camera sensor by sub-pixel distances for each image, we change masks in the lens aperture and slightly defocus the lens. The resulting capture system is simpler, and tolerates modest mask registration errors well. We present a theoretical analysis of the camera and image merging method, show both simulated results and actual results from a crudely modified consumer camera, and compare its results to robust 'blind' methods that rely on uncontrolled camera displacements.
An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform The problems of random projections and sparse reconstruction have much in common and individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces that were successfully used in the context of sparse reconstruction to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley's theorem for bounding Gaussian processes. Our main result states that any set of $N = \exp(\tilde{O}(n))$ real vectors in $n$ dimensional space can be linearly mapped to a space of dimension $k=O(\log N\polylog(n))$, while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time $O(n\log n)$ on each vector. This improves on the best known $N = \exp(\tilde{O}(n^{1/2}))$ achieved by Ailon and Liberty and $N = \exp(\tilde{O}(n^{1/3}))$ by Ailon and Chazelle. The dependence in the distortion constant however is believed to be suboptimal and subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a $\polylog(n)$ factor while considerably simplifying their constructions.
An Interval-Valued Intuitionistic Fuzzy Rough Set Model Given the widespread interest in rough sets as applied to various tasks of data analysis, it is not surprising at all that we have witnessed a wave of further generalizations and algorithmic enhancements of this original concept. This paper proposes an interval-valued intuitionistic fuzzy rough model by means of integrating the classical Pawlak rough set theory with interval-valued intuitionistic fuzzy set theory. Firstly, some concepts and properties of interval-valued intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy relations are introduced. Secondly, a pair of lower and upper interval-valued intuitionistic fuzzy rough approximation operators induced from an interval-valued intuitionistic fuzzy relation is defined, and some properties of the approximation operators are investigated in detail. Furthermore, by introducing cut sets of interval-valued intuitionistic fuzzy sets, classical representations of interval-valued intuitionistic fuzzy rough approximation operators are presented. Finally, the connections between special interval-valued intuitionistic fuzzy relations and interval-valued intuitionistic fuzzy rough approximation operators are constructed, and the relationships between this model and other rough set models are also examined.
1.036877
0.04
0.04
0.036667
0.02
0.009167
0.003057
0.00038
0.000037
0.000002
0
0
0
0
Algorithmic Macromodelling Methods for Mixed-Signal Systems Electronic systems today, especially those for communications and sensing, are typically composed of a complex mix of digital and mixed-signal circuit blocks. Verifying such systems prior to fabrication is challenging due to their size and complexity. Automated model generation is becoming an increasingly important component of methodologies for effective system verification. In this paper, we review algorithmically-based model generation methods for linear and nonlinear systems. We comment on the development of such macromodelling methods over the last decade, clarify their domains of application and evaluate their strengths and current limitations.
General-Purpose Nonlinear Model-Order Reduction Using Piecewise-Polynomial Representations We present algorithms for automated macromodeling of nonlinear mixed-signal system blocks. A key feature of our methods is that they automate the generation of general-purpose macromodels that are suitable for a wide range of time- and frequency-domain analyses important in mixed-signal design flows. In our approach, a nonlinear circuit or system is approximated using piecewise-polynomial (PWP) representations. Each polynomial system is reduced to a smaller one via weakly nonlinear polynomial model-reduction methods. Our approach, dubbed PWP, generalizes recent trajectory-based piecewise-linear approaches and ties them with polynomial-based model-order reduction, which inherently captures stronger nonlinearities within each region. PWP-generated macromodels not only reproduce small-signal distortion and intermodulation properties well but also retain fidelity in large-signal transient analyses. The reduced models can be used as drop-in replacements for large subsystems to achieve fast system-level simulation using a variety of time- and frequency-domain analyses (such as dc, ac, transient, harmonic balance, etc.). For the polynomial reduction step within PWP, we also present a novel technique [dubbed multiple pseudoinput (MPI)] that combines concepts from proper orthogonal decomposition with Krylov-subspace projection. We illustrate the use of PWP and MPI with several examples (including op-amps and I/O buffers) and provide important implementation details. Our experiments indicate that it is easy to obtain speedups of about an order of magnitude with push-button nonlinear macromodel-generation algorithms.
Model reduction of time-varying linear systems using approximate multipoint Krylov-subspace projectors In this paper a method is presented for model reduction of systems described by time-varying differential-algebraic equations. The method allows automated extraction of reduced models for nonlinear RF blocks, such as mixers and filters, that have a near-linear signal path but may contain strongly nonlinear time-varying components. The models have the accuracy of a transistor-level nonlinear simulation but are very compact and so can be used in system-level simulation and design. The model reduction procedure is based on a multipoint rational approximation algorithm formed by orthogonal projection of the original time-varying linear system into an approximate Krylov subspace. The models obtained from the approximate Krylov-subspace projectors can be obtained much more easily than with the exact projectors but show negligible difference in accuracy.
A Piecewise-Linear Moment-Matching Approach to Parameterized Model-Order Reduction for Highly Nonlinear Systems This paper presents a parameterized reduction technique for highly nonlinear systems. In our approach, we first approximate the nonlinear system with a convex combination of parameterized linear models created by linearizing the nonlinear system at points along training trajectories. Each of these linear models is then projected using a moment-matching scheme into a low-order subspace, resulting in a parameterized reduced-order nonlinear system. Several options for selecting the linear models and constructing the projection matrix are presented and analyzed. In addition, we propose a training scheme which automatically selects parameter-space training points by approximating parameter sensitivities. Results and comparisons are presented for three examples which contain distributed strong nonlinearities: a diode transmission line, a microelectromechanical switch, and a pulse-narrowing nonlinear transmission line. In most cases, we are able to accurately capture the parameter dependence over parameter ranges of ±50% from the nominal values and to achieve an average simulation speedup of about 10x.
Asymptotic waveform evaluation for timing analysis Asymptotic waveform evaluation (AWE) provides a generalized approach to linear RLC circuit response approximations. The RLC interconnect model may contain floating capacitors, grounded resistors, inductors, and even linear controlled sources. The transient portion of the response is approximated by matching the initial boundary conditions and the first 2q-1 moments of the exact response to a lower-order q-pole model. For the case of an RC tree model, a first-order AWE approximation reduces to the RC tree methods
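To make the moment-matching idea concrete, here is a small NumPy sketch that generates the first 2q moments of a lumped model H(s) = c^T (G + sC)^{-1} b about s = 0, which AWE would then fit with a q-pole Pade approximation; the Pade/partial-fraction step is omitted, and the two-node RC example values are illustrative assumptions, not taken from the paper.

    import numpy as np

    def moments(G, C, b, c, q):
        # Moments m_k = c^T (-G^{-1} C)^k G^{-1} b, i.e. Taylor coefficients of H(s) at s = 0.
        r = np.linalg.solve(G, b)          # G^{-1} b
        M = np.linalg.solve(G, C)          # G^{-1} C
        out = []
        for _ in range(2 * q):
            out.append(c @ r)
            r = -M @ r                     # next moment direction
        return np.array(out)

    # Tiny two-node RC example (illustrative element values)
    G = np.array([[2.0, -1.0], [-1.0, 1.0]])   # conductance matrix
    C = np.diag([1e-3, 2e-3])                  # capacitance matrix
    b = np.array([1.0, 0.0])                   # current input at node 1
    c = np.array([0.0, 1.0])                   # voltage output at node 2
    print(moments(G, C, b, c, q=2))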
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical static timing analysis (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity based operating condition as a supporting construct for variation-aware STA flows.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Compressive wireless sensing Compressive sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of compressive wireless sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
Analysis of the domain mapping method for elliptic diffusion problems on random domains. In this article, we provide a rigorous analysis of the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loève expansion of the domain perturbation field, we establish decay rates for the derivatives of the random solution that are independent of the stochastic dimension. For the implementation of a related approximation scheme, like quasi-Monte Carlo quadrature, stochastic collocation, etc., we propose parametric finite elements to compute the solution of the diffusion problem on each individual realization of the domain generated by the perturbation field. This simplifies the implementation and yields a non-intrusive approach. Having this machinery at hand, we can easily transfer it to stochastic interface problems. The theoretical findings are complemented by numerical examples for both, stochastic interface problems and boundary value problems on random domains.
Coding Algorithms for 3DTV—A Survey Research efforts on 3DTV technology have been strengthened worldwide recently, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data. Efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data including stereo video, multiview video, and associated depth or disparity maps extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and coding of video plus depth data are available and under development, which will provide the basis for introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity. For static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity. Compression of dynamic 3D geometry is currently a more active field of research. Temporal prediction is an important mechanism to remove redundancy from animated 3D mesh sequences. Error resilience is important for transmission of data over error prone channels, and multiple description coding (MDC) is a suitable way to protect data. MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well. The 3D watermarking methods in the literature are classified into three groups, considering the dimensions of the main components of scene representations and the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing. Systems and services may enter the market in the near future. However, the research area is relatively young compared to coding of other types of media. Therefore, there is still a lot of room for improvement and new development of algorithms.
Random Alpha Pagerank We suggest a revision to the PageRank random surfer model that considers the influence of a population of random surfers on the PageRank vector. In the revised model, each member of the population has its own teleportation parameter chosen from a probability distribution, and consequently, the ranking vector is random. We propose three algorithms for computing the statistics of the random ranking vector based respectively on (i) random sampling, (ii) paths along the links of the underlying graph, and (iii) quadrature formulas. We find that the expectation of the random ranking vector produces similar rankings to its deterministic analogue, but the standard deviation gives uncorrelated information (under a Kendall-tau metric) with myriad potential uses. We examine applications of this model to web spam.
Directional relative position between objects in image processing: a comparison between fuzzy approaches The importance of describing relationships between objects has been highlighted in works in very different areas, including image understanding. Among these relationships, directional relative position relations are important since they provide an important information about the spatial arrangement of objects in the scene. Such concepts are rather ambiguous, they defy precise definitions, but human beings have a rather intuitive and common way of understanding and interpreting them. Therefore in this context, fuzzy methods are appropriate to provide consistent definitions that integrate both quantitative and qualitative knowledge, thus providing a computational representation and interpretation of imprecise spatial relations, expressed in a linguistic way, and including quantitative knowledge. Several fuzzy approaches have been developed in the literature, and the aim of this paper is to review and compare them according to their properties and according to the types of questions they seek to answer.
New Type-2 Rule Ranking Indices for Designing Parsimonious Interval Type-2 Fuzzy Logic Systems In this paper, we propose two novel indices for type-2 fuzzy rule ranking to identify the most influential fuzzy rules in designing type-2 fuzzy logic systems, and name them as R-values and c-values of fuzzy rules separately. The R-values of type-2 fuzzy rules are obtained by applying QR decomposition in which there is no need to estimate a rank as required in the SVD-QR with column pivoting algorithm. The c-values of type-2 fuzzy rules are suggested to rank rules based on the effects of rule consequents. Experimental results on a signal recovery problem have shown that by using the proposed indices the most influential type-2 fuzzy rules can be effectively selected to construct parsimonious type-2 fuzzy models while the system performances are kept at a satisfied level.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.2
0.05
0.028571
0.022222
0.003226
0
0
0
0
0
0
0
0
0
Weighted Reduced Basis Method for Stochastic Optimal Control Problems with Elliptic PDE Constraint In this paper we develop and analyze an efficient computational method for solving stochastic optimal control problems constrained by an elliptic partial differential equation (PDE) with random input data. We first prove both existence and uniqueness of the optimal solution. Regularity of the optimal solution in the stochastic space is studied in view of the analysis of stochastic approximation error. For numerical approximation, we employ a finite element method for the discretization of physical variables, and a stochastic collocation method for the discretization of random variables. In order to alleviate the computational effort, we develop a model order reduction strategy based on a weighted reduced basis method. A global error analysis of the numerical approximation is carried out, and several numerical tests are performed to verify our analysis.
An Efficient Numerical Method for Acoustic Wave Scattering in Random Media This paper is concerned with developing efficient numerical methods for acoustic wave scattering in random media which can be expressed as random perturbations of homogeneous media. We first analyze the random Helmholtz problem by deriving some wavenumber-explicit solution estimates. We then establish a multimodes representation of the solution as a power series of the perturbation parameter and analyze its finite modes approximations. Based on this multimodes representation, we develop a Monte Carlo interior penalty discontinuous Galerkin (MCIP-DG) method for approximating the mode functions, which are governed by recursively defined nearly deterministic Helmholtz equations. Optimal order error estimates are derived for the method, and an efficient algorithm, which is based on the LU direct solver, is also designed for efficiently implementing the proposed multimodes MCIP-DG method. It is proved that the computational complexity of the whole algorithm is comparable to that of solving one deterministic Helmholtz problem using the LU direct solver. Numerical experiments are provided to validate the theoretical results and to gauge the performance of the proposed numerical method and algorithm.
Reduced Basis Methods for Parameterized Partial Differential Equations with Stochastic Influences Using the Karhunen-Loève Expansion We consider parametric partial differential equations (PPDEs) with stochastic influences, e.g., in terms of random coefficients. Using standard discretizations such as finite elements, this often amounts to high-dimensional problems. In a many-query context, the PPDE has to be solved for various instances of the deterministic parameter as well as the stochastic influences. To decrease computational complexity, we derive a reduced basis method (RBM), where the uncertainty in the coefficients is modeled using Karhunen-Loève (KL) expansions. We restrict ourselves to linear coercive problems with linear and quadratic output functionals. A new a posteriori error analysis is presented that generalizes and extends some of the results by Boyaval et al. [Comput. Methods Appl. Mech. Engrg., 198 (2009), pp. 3187-3206]. The additional KL-truncation error is analyzed for the state, output functionals, and also for statistical outputs such as mean and variance. Error estimates for quadratic outputs are obtained using additional nonstandard dual problems. Numerical experiments for a two-dimensional porous medium demonstrate the effectivity of this approach.
A Trust-Region Algorithm with Adaptive Stochastic Collocation for PDE Optimization under Uncertainty. The numerical solution of optimization problems governed by partial differential equations (PDEs) with random coefficients is computationally challenging because of the large number of deterministic PDE solves required at each optimization iteration. This paper introduces an efficient algorithm for solving such problems based on a combination of adaptive sparse-grid collocation for the discretization of the PDE in the stochastic space and a trust-region framework for optimization and fidelity management of the stochastic discretization. The overall algorithm adapts the collocation points based on the progress of the optimization algorithm and the impact of the random variables on the solution of the optimization problem. It frequently uses few collocation points initially and increases the number of collocation points only as necessary, thereby keeping the number of deterministic PDE solves low while guaranteeing convergence. Currently an error indicator is used to estimate gradient errors due to adaptive stochastic collocation. The algorithm is applied to three examples, and the numerical results demonstrate a significant reduction in the total number of PDE solves required to obtain an optimal solution when compared with a Newton conjugate gradient algorithm applied to a fixed high-fidelity discretization of the optimization problem.
Error Estimates of Stochastic Optimal Neumann Boundary Control Problems We study mathematically and computationally optimal control problems for stochastic partial differential equations with Neumann boundary conditions. The control objective is to minimize the expectation of a cost functional, and the control is of the deterministic, boundary-value type. Mathematically, we prove the existence of an optimal solution and of a Lagrange multiplier; we represent the input data in terms of their Karhunen-Loève expansions and deduce the deterministic optimality system of equations. Computationally, we approximate the finite element solution of the optimality system and estimate its error through the discretizations with respect to both spatial and random parameter spaces.
Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients In Monte Carlo methods quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows, in certain cases, to reduce the overall work to that of the discretization of one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
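A toy NumPy sketch of the multilevel telescoping estimator: coarse levels carry most of the samples while fine levels only estimate small corrections, so the overall cost is dominated by the coarse model. The scalar "level-l approximation" used here (a truncated Taylor series for exp(Z), with Z standard normal) and the per-level sample counts are illustrative stand-ins for the finite element hierarchy and the optimized sample allocation analyzed in the paper.

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(6)

    def P(z, level):
        # Level-`level` approximation of exp(z): Taylor series truncated after level + 1 terms.
        ks = np.arange(level + 2)
        facts = np.array([factorial(k) for k in ks], dtype=float)
        return np.sum(z[:, None] ** ks / facts, axis=1)

    L = 5
    N = [4000 // 2 ** l + 100 for l in range(L + 1)]       # fewer samples on finer levels
    est = 0.0
    for l in range(L + 1):
        z = rng.standard_normal(N[l])                      # same samples feed both levels
        correction = P(z, l) - (P(z, l - 1) if l > 0 else 0.0)
        est += correction.mean()
    print(est, np.exp(0.5))                                # estimate vs. E[exp(Z)] = exp(1/2)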
Model Reduction for Large-Scale Systems with High-Dimensional Parametric Input Space A model-constrained adaptive sampling methodology is proposed for the reduction of large-scale systems with high-dimensional parametric input spaces. Our model reduction method uses a reduced basis approach, which requires the computation of high-fidelity solutions at a number of sample points throughout the parametric input space. A key challenge that must be addressed in the optimization, control, and probabilistic settings is the need for the reduced models to capture variation over this parametric input space, which, for many applications, will be of high dimension. We pose the task of determining appropriate sample points as a PDE-constrained optimization problem, which is implemented using an efficient adaptive algorithm that scales well to systems with a large number of parameters. The methodology is demonstrated using examples with parametric input spaces of dimension 11 and 21, which describe thermal analysis and design of a heat conduction fin, and compared with statistically based sampling methods. For these examples, the model-constrained adaptive sampling leads to reduced models that, for a given basis size, have error several orders of magnitude smaller than that obtained using the other methods.
A Probabilistic and RIPless Theory of Compressed Sensing This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution $F$; it includes all standard models—e.g., Gaussian, frequency measurements—discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution $F$ obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) to hold near the sparsity level in question, nor a random model for the signal. As an example, the paper shows that a signal with $s$ nonzero entries can be faithfully recovered from about $s \log n$ Fourier coefficients that are contaminated with noise.
NESTA: A Fast and Accurate First-Order Method for Sparse Recovery Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. This paper applies a smoothing technique and an accelerated first-order algorithm, both from Nesterov [Math. Program. Ser. A, 103 (2005), pp. 127-152], and demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as (1) it is computationally efficient, (2) it is accurate and returns solutions with several correct digits, (3) it is flexible and amenable to many kinds of reconstruction problems, and (4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization and convex programs seeking to minimize the $\ell_1$ norm of $Wx$ under constraints, in which $W$ is not diagonal. The code is available online as a free package in the MATLAB language.
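For readers who want a feel for this class of methods, the sketch below is FISTA (an accelerated proximal-gradient method) applied to the unconstrained LASSO form of the sparse recovery problem. It is deliberately not NESTA, which smooths the $\ell_1$ term and works with a constrained formulation, but it shows the same ingredients of a first-order method with Nesterov-style acceleration; all problem sizes and the value of `lam` are illustrative.

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of the smooth part
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # gradient + soft-threshold step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # Nesterov momentum
        x, t = x_new, t_new
    return x

# Toy usage: recover a 5-sparse vector of length 200 from 80 random measurements.
rng = np.random.default_rng(0)
n, m, s = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = fista_lasso(A, A @ x_true, lam=0.01)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```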
A compressed sensing approach for biological microscopic image processing In fluorescence microscopy the noise level and the photobleaching are cross-dependent problems since reducing exposure time to reduce photobleaching degrades image quality while increasing noise level. These two problems cannot be solved independently as a post-processing task, hence the most important contribution in this work is to a-priori denoise and reduce photobleaching simultaneously by using the Compressed Sensing framework (CS). In this paper, we propose a CS-based denoising framework, based on statistical properties of the CS optimality, noise reconstruction characteristics and signal modeling applied to microscopy images with low signal-to-noise ratio (SNR). Our approach has several advantages over traditional denoising methods, since it can under-sample, recover and denoise images simultaneously. We demonstrate with simulated and practical experiments on fluorescence image data that thanks to CS denoising we can obtain images with similar or increased SNR while still being able to reduce exposure times.
Learning minimal abstractions Static analyses are generally parametrized by an abstraction which is chosen from a family of abstractions. We are interested in flexible families of abstractions with many parameters, as these families can allow one to increase precision in ways tailored to the client without sacrificing scalability. For example, we consider k-limited points-to analyses where each call site and allocation site in a program can have a different k value. We then ask a natural question in this paper: What is the minimal (coarsest) abstraction in a given family which is able to prove a set of queries? In addressing this question, we make the following two contributions: (i) We introduce two machine learning algorithms for efficiently finding a minimal abstraction; and (ii) for a static race detector backed by a k-limited points-to analysis, we show empirically that minimal abstractions are actually quite coarse: It suffices to provide context/object sensitivity to a very small fraction (0.4-2.3%) of the sites to yield equally precise results as providing context/object sensitivity uniformly to all sites.
On the empirical rate-distortion performance of Compressive Sensing Compressive Sensing (CS) is a new paradigm in signal acquisition and compression that has been attracting the interest of the signal compression community. When it comes to image compression applications, it is relevant to estimate the number of bits required to reach a specific image quality. Although several theoretical results regarding the rate-distortion performance of CS have been published recently, there are not many practical image compression results available. The main goal of this paper is to carry out an empirical analysis of the rate-distortion performance of CS in image compression. We analyze issues such as the minimization algorithm used and the transform employed, as well as the trade-off between number of measurements and quantization error. From the experimental results obtained we highlight the potential and limitations of CS when compared to traditional image compression methods.
Case-Based Reasoning for Cash-Flow Forecasting using Fuzzy Retrieval Case-Based Reasoning (CBR) simulates the human way of solving problems, as it solves a new problem by applying a successful past experience with a similar problem. In this paper we describe a CBR system that performs forecasts for cash flow accounts. Forecasting cash flows to a certain degree of accuracy is an important aspect of a Working Capital decision support system. Working Capital (WC) management decisions reflect a choice among different options on how to arrange the cash flow. The decision establishes an actual event in the cash flow, which means that one needs to envision the consequences of such a decision. Hence, forecasting cash flows accurately can minimize losses caused by usually unpredictable events. Cash flows are usually forecasted by a combination of different techniques enhanced by human experts' feelings about the future, which are grounded in past experience. That is what makes the CBR paradigm the proper choice. Advantages of a CBR system over other Artificial Intelligence techniques are associated with knowledge acquisition, knowledge representation, reuse, updating and justification. An important step in developing a CBR system is the retrieval of similar cases. The proposed system makes use of fuzzy integrals to calculate the synthetic evaluations of similarities between cases instead of the usual weighted mean.
Fuzzy OWA model for information security risk management One of the methods for information security risk assessment is the substantiated choice and realization of countermeasures against threats. A situational fuzzy OWA model of a multicriteria decision making problem concerning the choice of countermeasures for reducing information security risks is proposed. The proposed model makes it possible to modify the associated weights of criteria based on the information entropy with respect to the aggregation situation. The advantage of the model is the continuous improvement of the weights of the criteria and the aggregation of experts’ opinions depending on the parameter characterizing the aggregation situation.
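As background, the plain OWA operator underlying such a model is only a few lines of code: the weights are applied to the sorted argument values, so they express an aggregation attitude rather than per-criterion importance. The entropy-driven, situation-dependent adjustment of the weights that the paper proposes is not reproduced here; the numbers below are purely illustrative.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: apply the weights to the values sorted in
    descending order (weights encode optimism/pessimism, not criterion importance)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "OWA weights must sum to 1"
    return float(v @ w)

# Example: aggregate four expert scores for one countermeasure with
# weights biased toward the lower (more pessimistic) scores.
print(owa([0.8, 0.6, 0.9, 0.4], [0.1, 0.2, 0.3, 0.4]))
```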
1.050431
0.05
0.026417
0.025426
0.01402
0.001927
0.000247
0.00004
0.000005
0
0
0
0
0
Statistical timing analysis using levelized covariance propagation Variability in process parameters is making accurate timing analysis of nanoscale integrated circuits an extremely challenging task. In this paper, we propose a new algorithm for statistical timing analysis using levelized covariance propagation (LCP). The algorithm simultaneously considers the impact of random placement of dopants (which makes every transistor in a die independent in terms of threshold voltage) and the spatial correlation of the process parameters such as channel length, transistor width and oxide thickness due to the intra-die variations. It also considers the signal correlation due to reconvergent paths in the circuit. Results on several benchmark circuits in 70 nm technology show an average of 0.21 % and 1.07 % errors in mean and the standard deviation, respectively, in timing analysis using the proposed technique compared to the Monte-Carlo analysis.
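For orientation, block-based statistical timing tools of this kind repeatedly approximate the mean and variance of max(X, Y) for correlated Gaussian arrival times; Clark's classic moment-matching formulas below are a standard building block of such propagation. The paper's levelized covariance bookkeeping itself is not reproduced here, and the numbers are illustrative.

```python
import math

def gaussian_max_moments(mu1, s1, mu2, s2, rho):
    """Clark's moment-matching approximation of mean and variance of max(X, Y)
    for jointly Gaussian X ~ N(mu1, s1^2), Y ~ N(mu2, s2^2) with correlation rho."""
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))          # standard normal cdf
    theta = math.sqrt(max(s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2, 1e-12))
    a = (mu1 - mu2) / theta
    mean = mu1 * Phi(a) + mu2 * Phi(-a) + theta * phi(a)
    second = ((mu1 * mu1 + s1 * s1) * Phi(a) + (mu2 * mu2 + s2 * s2) * Phi(-a)
              + (mu1 + mu2) * theta * phi(a))
    return mean, second - mean * mean

# Two correlated gate arrival times (units arbitrary):
print(gaussian_max_moments(1.0, 0.10, 0.95, 0.15, rho=0.3))
```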
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
Fast statistical timing analysis of latch-controlled circuits for arbitrary clock periods Latch-controlled circuits have a remarkable advantage in timing performance as process variations become more relevant for circuit design. Existing methods of statistical timing analysis for such circuits, however, still need improvement in runtime and their results should be extended to provide yield information for any given clock period. In this paper, we propose a method combining a simplified iteration and a graph transformation algorithm. The result of this method is in a parametric form so that the yield for any given clock period can easily be evaluated. The graph transformation algorithm handles the constraints from nonpositive loops effectively, completely avoiding the heuristics used in other existing methods. Therefore the accuracy of the timing analysis is well maintained. Additionally, the proposed method is much faster than other existing methods. Especially for large circuits it offers about 100 times performance improvement in timing verification.
Statistical timing verification for transparently latched circuits through structural graph traversal Level-sensitive transparent latches are widely used in high-performance sequential circuit designs. Under process variations, the timing of a transparently latched circuit will adapt to random delays at runtime due to time borrowing. The central problem to determine the timing yield is to compute the probability of the presence of a positive cycle in the latest latch timing graph. Existing algorithms are either optimistic since cycles are omitted or require iterations that cannot be polynomially bounded. In this paper, we present the first algorithm to compute such a probability based on block-based statistical timing analysis that, first, covers all cycles through a structural graph traversal, and second, terminates within a polynomial number of statistical "sum" and "max" operations. Experimental results confirm that the proposed approach is effective and efficient.
A probabilistic analysis of pipelined global interconnect under process variations The main thesis of this paper is to perform a reliability based performance analysis for a shared latch inserted global interconnect under uncertainty. We first put forward a novel delay metric named DMA for estimation of interconnect delay probability density function considering process variations. Without considerable loss in accuracy, DMA can achieve high computational efficiency even in a large space of random variables. We then propose a comprehensive probabilistic methodology for sampling transfers, on a shared latch inserted global interconnect, that highly improves the reliability of the interconnect. Improvements up to 125% are observed in the reliability when compared to deterministic sampling approach. It is also shown that dual phase clocking scheme for pipelined global interconnect is able to meet more stringent timing constraints due to its lower latency.
Parameterized block-based non-gaussian statistical gate timing analysis As technology scales down, timing verification of digital integrated circuits becomes an increasingly challenging task due to the gate and wire variability. Therefore, statistical timing analysis (denoted by σTA) is becoming unavoidable. This paper introduces a new framework for performing statistical gate timing analysis for non-Gaussian sources of variation in block-based σTA. First, an approach is described to approximate a variational RC-π load by using a canonical first-order model. Next, an accurate variation-aware gate timing analysis based on statistical input transition, statistical gate timing library, and statistical RC-π load is presented. Finally, to achieve the aforementioned objective, a statistical effective capacitance calculation method is presented. Experimental results show an average error of 6% for gate delay and output transition time with respect to the Monte Carlo simulation with 104 samples while the runtime is nearly two orders of magnitude shorter.
Criticality computation in parameterized statistical timing Chips manufactured in 90 nm technology have shown large parametric variations, and a worsening trend is predicted. These parametric variations make circuit optimization difficult since different paths are frequency-limiting in different parts of the multi-dimensional process space. Therefore, it is desirable to have a new diagnostic metric for robust circuit optimization. This paper presents a novel algorithm to compute the criticality probability of every edge in the timing graph of a design with linear complexity in the circuit size. Using industrial benchmarks, we verify the correctness of our criticality computation via Monte Carlo simulation. We also show that for large industrial designs with 442,000 gates, our algorithm computes all edge criticalities in less than 160 seconds.
Variational delay metrics for interconnect timing analysis In this paper we develop an approach to model interconnect delay under process variability for timing analysis and physical design optimization. The technique allows for closed-form computation of interconnect delay probability density functions (PDFs) given variations in relevant process parameters such as linewidth, metal thickness, and dielectric thickness. We express the resistance and capacitance of a line as a linear function of random variables and then use these to compute circuit moments. Finally, these variability-aware moments are used in known closed-form delay metrics to compute interconnect delay PDFs. We compare the approach to SPICE based Monte Carlo simulations and report an error in mean and standard deviation of delay of 1% and 4% on average, respectively.
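A minimal sketch of the underlying idea of pushing process variations into a delay distribution, using a single-pole Elmore-style delay D = R*C as a stand-in for the paper's closed-form metrics: write R and C as linear functions of independent standard-normal parameter variations, then propagate the mean and the first-order variance. The sensitivities and nominal values below are made up for illustration.

```python
import numpy as np

def elmore_delay_stats(R0, C0, r_sens, c_sens):
    """Mean and first-order standard deviation of D = R*C, where
    R = R0 + sum_i r_sens[i]*xi_i and C = C0 + sum_i c_sens[i]*xi_i
    with xi_i independent standard normal process variations."""
    r, c = np.asarray(r_sens, float), np.asarray(c_sens, float)
    mean = R0 * C0 + np.sum(r * c)          # exact mean for this bilinear form
    d_sens = R0 * c + C0 * r                # dD/dxi_i at the nominal point
    return mean, float(np.sqrt(np.sum(d_sens ** 2)))

# Cross-check the linearized statistics against Monte Carlo.
rng = np.random.default_rng(1)
R0, C0, r_s, c_s = 100.0, 2e-3, [5.0, 2.0], [1e-4, 4e-5]
xi = rng.standard_normal((200000, 2))
D = (R0 + xi @ np.array(r_s)) * (C0 + xi @ np.array(c_s))
print(elmore_delay_stats(R0, C0, r_s, c_s))
print(D.mean(), D.std())
```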
Statistical critical path analysis considering correlations Critical path analysis is always an important task in timing verification. For today's nanometer IC technologies, process variations have a significant impact on circuit performance. The variability can change the criticality of long paths (Gattiker et al., 2002). Therefore, statistical approaches should be incorporated in critical path analysis. In this paper, we present two novel techniques that can efficiently evaluate path criticality under statistical non-linear delay models. They are integrated into a block-based statistical timing tool with the capability of handling arbitrary correlations from manufacturing process dependence and also path sharing. Experiments on ISCAS85 benchmarks as well as industrial circuits prove both accuracy and efficiency of these techniques.
Computing discrepancies of Smolyak quadrature rules In recent years, Smolyak quadrature rules (also called quadratures on hyperbolic cross points or sparse grids) have gained interest as a possible competitor to number theoretic quadratures for high dimensional problems. A standard way of comparing the quality of multivariate quadrature formulas consists in computing their L2-discrepancy. Especially for larger dimensions, such computations are a highly complex task. In this paper we develop a fast recursive algorithm for computing the L2...
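For reference, the quantity being computed admits a closed form: Warnock's formula gives the L2 star discrepancy of any point set, and a direct implementation costs O(N^2 d), which is exactly why faster recursive algorithms for structured point sets such as sparse grids are of interest. The sketch below is the direct formula, not the paper's algorithm.

```python
import numpy as np

def l2_star_discrepancy(x):
    """Warnock's closed-form expression for the L2 star discrepancy of a point
    set x in [0,1]^d with shape (N, d).  Direct evaluation is O(N^2 d)."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 / n) * np.sum(np.prod((1.0 - x ** 2) / 2.0, axis=1))
    # pairwise products over coordinates k of (1 - max(x_ik, x_jk))
    mx = np.maximum(x[:, None, :], x[None, :, :])
    term3 = np.sum(np.prod(1.0 - mx, axis=2)) / n ** 2
    return float(np.sqrt(term1 - term2 + term3))

print(l2_star_discrepancy(np.random.default_rng(0).random((128, 3))))
```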
Wavelet-domain compressive signal reconstruction using a Hidden Markov Tree model Compressive sensing aims to recover a sparse or compressible signal from a small set of projections onto random vectors; conventional solutions involve linear programming or greedy algorithms that can be computationally expensive. Moreover, these recovery techniques are generic and assume no particular structure in the signal aside from sparsity. In this paper, we propose a new algorithm that enables fast recovery of piecewise smooth signals, a large and useful class of signals whose sparse wavelet expansions feature a distinct "connected tree" structure. Our algorithm fuses recent results on iterative reweighted ℓ1-norm minimization with the wavelet Hidden Markov Tree model. The resulting optimization-based solver outperforms the standard compressive recovery algorithms as well as previously proposed wavelet-based recovery algorithms. As a bonus, the algorithm reduces the number of measurements necessary to achieve low-distortion reconstruction.
Possibility Theory in Constraint Satisfaction Problems: Handling Priority, Preference and Uncertainty In classical Constraint Satisfaction Problems (CSPs) knowledge is embedded in a set of hard constraints, each one restricting the possible values of a set of variables. However constraints in real world problems are seldom hard, and CSPs are often idealizations that do not account for the preference among feasible solutions. Moreover some constraints may have priority over others. Lastly, constraints may involve uncertain parameters. This paper advocates the use of fuzzy sets and possibility theory as a realistic approach for the representation of these three aspects. Fuzzy constraints encompass both preference relations among possible instantiations and priorities among constraints. In a Fuzzy Constraint Satisfaction Problem (FCSP), a constraint is satisfied to a degree (rather than satisfied or not satisfied) and the acceptability of a potential solution becomes a gradual notion. Even if the FCSP is partially inconsistent, best instantiations are provided owing to the relaxation of some constraints. Fuzzy constraints are thus flexible. CSP notions of consistency and k-consistency can be extended to this framework and the classical algorithms used in CSP resolution (e.g., tree search and filtering) can be adapted without losing much of their efficiency. Most classical theoretical results remain applicable to FCSPs. In the paper, various types of constraints are modelled in the same framework. The handling of uncertain parameters is carried out in the same setting because possibility theory can account for both preference and uncertainty. The presence of uncertain parameters leads to ill-defined CSPs, where the set of constraints which defines the problem is not precisely known.
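The core evaluation step of such a flexible CSP is tiny: each soft constraint maps an assignment to a satisfaction degree in [0,1] (a hard constraint returns only 0 or 1, and a prioritized constraint can be capped by its priority), and the acceptability of a complete assignment is the minimum of these degrees. The constraints below are invented purely to illustrate the mechanics.

```python
def assignment_degree(assignment, fuzzy_constraints):
    """Acceptability of a complete assignment in a fuzzy CSP: the minimum of the
    satisfaction degrees of all constraints (min plays the role of conjunction)."""
    return min(c(assignment) for c in fuzzy_constraints)

# Two toy constraints over variables x and y:
#   - "x is around 5" (soft, degree decays linearly with distance),
#   - "x + y equals 10" (hard, degree 0 or 1; a priority p could relax it via max(1 - p, degree)).
around_5 = lambda a: max(0.0, 1.0 - abs(a["x"] - 5) / 3.0)
sum_is_10 = lambda a: 1.0 if a["x"] + a["y"] == 10 else 0.0
print(assignment_degree({"x": 6, "y": 4}, [around_5, sum_is_10]))   # -> 0.666...
```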
Compressive Acquisition of Dynamic Scenes Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments including classification experiments that highlight the effectiveness of the proposed approach.
Mesh denoising via L0 minimization We present an algorithm for denoising triangulated models based on L0 minimization. Our method maximizes the flat regions of the model and gradually removes noise while preserving sharp features. As part of this process, we build a discrete differential operator for arbitrary triangle meshes that is robust with respect to degenerate triangulations. We compare our method versus other anisotropic denoising algorithms and demonstrate that our method is more robust and produces good results even in the presence of high noise.
1.026977
0.043636
0.043636
0.022545
0.018309
0.009301
0.003046
0.000558
0.000049
0.000002
0
0
0
0
Fuzzy assessment method on sampling survey analysis Developing a well-designed market survey questionnaire will ensure that surveyors get the information they need about the target market. A traditional sampling survey via questionnaire, which rates items using linguistic variables, is inherently vague and has difficulty reflecting an interviewee's incomplete and uncertain thoughts. Therefore, if we can use fuzzy senses to express the degree of an interviewee's feelings based on his own concepts, the sampling result will be closer to the interviewee's real thoughts. In this study, we propose a fuzzy approach to sampling surveys for aggregated assessment analysis. The proposed fuzzy assessment method for sampling survey analysis makes it easy to assess the sampling survey and to compute the aggregative evaluation.
Statistical confidence intervals for fuzzy data The application of fuzzy sets theory to statistical confidence intervals for unknown fuzzy parameters is proposed in this paper by considering fuzzy random variables. In order to obtain the belief degrees under the sense of fuzzy sets theory, we transform the original problem into the optimization problems. We provide the computational procedure to solve the optimization problems. A numerical example is also provided to illustrate the possible application of fuzzy sets theory to statistical confidence intervals.
Generalization of the group decision making using fuzzy sets theory for evaluating the rate of aggregative risk in software development This study proposes an algorithm for group decision making with crisp or fuzzy weights to tackle the rate of aggregative risk in software development under fuzzy circumstances, using fuzzy sets theory, during any phase of the life cycle. The proposed algorithm is more flexible and useful than the ones we have presented before (H.-M. Lee, Fuzzy Sets and Systems 79 (3) (1996) 323-336; 80 (3) (1996) 261-271), since the weights assigned to decision makers are considered. (C) 1999 Elsevier Science Inc. All rights reserved.
Applying fuzzy set theory to evaluate the rate of aggregative risk in software development The purpose of this study is not only to build a structure model of risk in software development but also to evaluate the rate of aggregative risk by fuzzy set theory. While evaluating the rate of aggregative risk, we may adjust the weights or grades of the factors until we can accept it. We also show that the rate of aggregative risk is reasonable.
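A minimal sketch of the kind of aggregation such an evaluation relies on, assuming risk grades are given as triangular fuzzy numbers (a, b, c) and combined by a weighted average, with a centroid defuzzification if a single crisp rate is wanted; the factor grades and weights below are illustrative, not taken from the paper.

```python
import numpy as np

def aggregate_risk(grades, weights):
    """Weighted average of triangular fuzzy grades (a, b, c) with crisp weights;
    component-wise arithmetic is valid for nonnegative weights.  Returns the
    aggregated triangular fuzzy number and its centroid (a + b + c) / 3."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    g = np.asarray(grades, dtype=float)      # shape (n_factors, 3)
    agg = w @ g
    return agg, float(agg.mean())

# Illustrative factors (e.g. personnel, schedule, technology) and weights:
grades = [(0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5)]
print(aggregate_risk(grades, weights=[0.5, 0.3, 0.2]))
```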
Incorporating filtering techniques in a fuzzy linguistic multi-agent model for information gathering on the web In (Computing with Words, Wiley, New York, 2001, p. 251; Soft Comput. 6 (2002) 320; Fuzzy Logic and The Internet, Physica-Verlag, Springer, Wurzburg, Berlin, 2003) we presented different fuzzy linguistic multi-agent models for helping users in their information gathering processes on the Web. In this paper we describe a new fuzzy linguistic multi-agent model that incorporates two information filtering techniques in its structure: a content-based filtering agent and a collaborative filtering agent. Both elements are introduced to increase the information filtering possibilities of multi-agent system on the Web and, in such a way, to improve its retrieval issues.
On type-2 fuzzy sets and their t-norm operations In this paper, we discuss t-norm extension operations of general binary operation for fuzzy true values on a linearly ordered set, with a unit interval and a real number set as special cases. On the basis of it, t-norm operations of type-2 fuzzy sets and properties of type-2 fuzzy numbers are discussed.
Tuning The Matching Function For A Threshold Weighting Semantics In A Linguistic Information Retrieval System Information retrieval is an activity that attempts to produce documents that better fulfill user information needs. To achieve this activity an information retrieval system uses matching functions that specify the degree of relevance of a document with respect to a user query. Assuming linguistic-weighted queries we present a new linguistic matching function for a threshold weighting semantics that is defined using a 2-tuple fuzzy linguistic approach (Herrera F, Martinez L. IEEE Trans Fuzzy Syst 2000;8:746-752). This new 2-tuple linguistic matching function can be interpreted as a tuning of that defined in "Modelling the Retrieval Process for an Information Retrieval System Using an Ordinal Fuzzy Linguistic Approach" (Herrera-Viedma E. J Am Soc Inform Sci Technol 2001;52:460-475). We show that it simplifies the processes of computing in the retrieval activity, avoids the loss of precision in final results, and, consequently, can help to improve the users' satisfaction. (c) 2005 Wiley Periodicals, Inc.
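For readers unfamiliar with the cited 2-tuple fuzzy linguistic model, a value β in [0, g] on a scale of linguistic terms s_0, ..., s_g is represented as a pair (s_i, α) with α = β − i in [−0.5, 0.5), which is what avoids the rounding loss mentioned above. The sketch below shows the two translation functions; the scale size and the sample assessments are illustrative.

```python
import math

def to_two_tuple(beta, g):
    """Delta: map a value beta in [0, g] to the 2-tuple (term index i, alpha),
    with symbolic translation alpha = beta - i in [-0.5, 0.5)."""
    i = min(int(math.floor(beta + 0.5)), g)
    return i, beta - i

def from_two_tuple(i, alpha):
    """Delta^{-1}: recover the underlying value beta = i + alpha."""
    return i + alpha

# Aggregate three assessments on a 7-term scale s_0..s_6 (g = 6) by their mean:
mean_beta = (4 + 5 + 5) / 3
print(to_two_tuple(mean_beta, g=6))                    # (5, -0.333...): just below s_5
print(from_two_tuple(*to_two_tuple(mean_beta, g=6)))   # 4.666... recovered without rounding loss
```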
A Fuzzy Linguistic Irs Model Based On A 2-Tuple Fuzzy Linguistic Approach Information Retrieval Systems (IRSs) based on an ordinal fuzzy linguistic approach present some problems of loss of information and lack of precision when working with discrete linguistic expression domains or when applying approximation operations in the symbolic aggregation methods. In this paper, we present a new IRS model based on the 2-tuple fuzzy linguistic approach, which allows us to overcome the problems of ordinal fuzzy linguistic IRSs and improve their performance.
Decider: A fuzzy multi-criteria group decision support system Multi-criteria group decision making (MCGDM) aims to support preference-based decision over the available alternatives that are characterized by multiple criteria in a group. To increase the level of overall satisfaction for the final decision across the group and deal with uncertainty in decision process, a fuzzy MCGDM process (FMP) model is established in this study. This FMP model can also aggregate both subjective and objective information under multi-level hierarchies of criteria and evaluators. Based on the FMP model, a fuzzy MCGDM decision support system (called Decider) is developed, which can handle information expressed in linguistic terms, boolean values, as well as numeric values to assess and rank a set of alternatives within a group of decision makers. Real applications indicate that the presented FMP model and the Decider software are able to effectively handle fuzziness in both subjective and objective information and support group decision-making under multi-level criteria with a higher level of satisfaction by decision makers.
Similarity Measures Between Type-2 Fuzzy Sets In this paper, we give similarity measures between type-2 fuzzy sets and provide the axiom definition and properties of these measures. For practical use, we show how to compute the similarities between Gaussian type-2 fuzzy sets. Yang and Shih's [22] algorithm, a clustering method based on fuzzy relations by beginning with a similarity matrix, is applied to these Gaussian type-2 fuzzy sets by beginning with these similarities. The clustering results are reasonable consisting of a hierarchical tree according to different levels.
A Metasemantics to Refine Fuzzy If-Then Rules Fuzzy if-then rules are used to represent fuzzy models. Real data is later used to tune the model. Usually this forces a modification of the initial linguistic terms of the linguistic variables used for the model. Such modifications may lead to a loss in interpretability of the rules. In this paper we suggest using a form of multiresolution to tune the rules, by introducing more linguistic terms in the regions of the universe of discourse where this is needed. Starting with triangular shaped linguistic terms and recalling that symmetric triangles are first degree splines, a form of multiresolution is readily obtained, supporting a better accuracy of the model. It is shown that by using the linguistic modifiers "very" and "more or less" as well as the metasemantic modifiers "between" and "from - to" the interpretability of the original rule may be preserved.
Alternative Logics for Approximate Reasoning in Expert Systems: A Comparative Study In this paper we report the results of an empirical study to compare eleven alternative logics for approximate reasoning in expert systems. The several “compositional inference” axiom systems (described below) were used in an expert knowledge-based system. The quality of the system outputs—fuzzy linguistic phrases—were compared in terms of correctness and precision (non-vagueness).
TEMPORAL AND SPATIAL SCALING FOR STEREOSCOPIC VIDEO COMPRESSION In stereoscopic video, it is well-known that compression efficiency can be improved, without sacrificing PSNR, by predicting one view from the other. Moreover, additional gain can be achieved by subsampling one of the views, since the Human Visual System can perceive high frequency information from the other view. In this work, we propose subsampling of one of the views by scaling its temporal rate and/or spatial size at regular intervals using a real-time stereoscopic H.264/AVC codec, and assess the subjective quality of the resulting videos using DSCQS test methodology. We show that stereoscopic videos can be coded at a rate about 1.2 times that of monoscopic videos with little visual quality degradation.
SPECO: Stochastic Perturbation based Clock tree Optimization considering temperature uncertainty Modern computing system applications or workloads can bring significant non-uniform temperature gradient on-chip, and hence can cause significant temperature uncertainty during clock-tree synthesis. Existing designs of clock-trees have to assume a given time-invariant worst-case temperature map but cannot deal with a set of temperature maps under a set of workloads. For robust clock-tree synthesis considering temperature uncertainty, this paper presents a new problem formulation: Stochastic PErturbation based Clock Optimization (SPECO). In SPECO algorithm, one nominal clock-tree is pre-synthesized with determined merging points. The impact from the stochastic temperature variation is modeled by perturbation (or small physical displacement) of merging points to offset the induced skews. Because the implementation cost is reduced but the design complexity is increased, the determination of optimal positions of perturbed merging points requires a computationally efficient algorithm. In this paper, one Non-Monte-Carlo (NMC) method is deployed to generate skew and skew variance by one-time analysis when a set of stochastic temperature maps is already provided. Moreover, one principal temperature-map analysis is developed to reduce the design complexity by clustering correlated merging points based on the subspace of the correlation matrix. As a result, the new merging points can be efficiently determined level by level with both skew and its variance reduced. The experimental results show that our SPECO algorithm can effectively reduce the clock-skew and its variance under a number of workloads with minimized wire-length overhead and computational cost.
1.03899
0.052625
0.025
0.015112
0.006042
0.000404
0.000154
0.00006
0.000029
0.000007
0
0
0
0
Classification with sparse grids using simplicial basis functions Recently we presented a new approach [20] to the classification problem arising in data mining. It is based on the regularization network approach but in contrast to other methods, which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [52]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [30] where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method computes a nonlinear classifier but scales only linearly with the number of data points and is well suited for data mining applications where the amount of data is very large, but where the dimension of the feature space is moderately high. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows to handle more dimensions and the algorithm needs less operations per data point. We further extend the method to so-called anisotropic sparse grids, where now different a-priori chosen mesh sizes can be used for the discretization of each attribute. This can improve the run time of the method and the approximation results in the case of data sets with different importance of the attributes. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 14 dimensions. We show that our new method achieves correctness rates which are competitive to those of the best existing methods.
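For orientation, the sparse grid combination technique mentioned here assembles the classifier from solutions $f_{\mathbf{l}}$ computed on anisotropic full grids indexed by a level multi-index $\mathbf{l}=(l_1,\dots,l_d)$. One standard form of the combination formula (indexing conventions vary between papers, so this should be read as a representative statement rather than the exact one used in [30]) is

$$ f_n^{(c)}(\mathbf{x}) \;=\; \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q} \sum_{|\mathbf{l}|_1 \,=\, n+(d-1)-q} f_{\mathbf{l}}(\mathbf{x}), $$

where $f_{\mathbf{l}}$ denotes the solution of the regularized classification problem on the grid with mesh width $2^{-l_k}$ in direction $k$.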
Orthogonal polynomial expansions on sparse grids. We study the orthogonal polynomial expansion on sparse grids for a function of d variables in a weighted L2 space. Two fast algorithms are developed for computing the orthogonal polynomial expansion and evaluating a linear combination of orthogonal polynomials on sparse grids by combining the fast cosine transform, the fast transforms between the Chebyshev orthogonal polynomial basis and the orthogonal polynomial basis for the weighted L2 space, and a fast algorithm of computing hierarchically structured basis functions. The total number of arithmetic operations used in both algorithms is $O(n\log^{d+1}n)$ where n is the highest polynomial degree in one dimension. The exponential convergence of the approximation for the analytic function is investigated. Specifically, we show the sub-exponential convergence for analytic functions and moreover we prove the approximation order is optimal for the Chebyshev orthogonal polynomial expansion. We furthermore establish the fully exponential convergence for functions with a somewhat stronger analytic assumption. Numerical experiments confirm the theoretical results and demonstrate the efficiency and stability of the proposed algorithms.
The exponent of discrepancy of sparse grids is at least 2.1933 We study bounds on the exponents of sparse grids for L2-discrepancy and average case d-dimensional integration with respect to the Wiener sheet measure. Our main result is that the minimal exponent of sparse grids for these problems is bounded from below by 2.1933. This shows that sparse grids provide a rather poor exponent since, due to Wasilkowski and Woźniakowski [16], the minimal exponent of L2-discrepancy of arbitrary point sets is at most 1.4778. The proof of the latter, however, is non-constructive. The best known constructive upper bound is still obtained by a particular sparse grid and equal to 2.4526...
Stochastic Solutions for the Two-Dimensional Advection-Diffusion Equation In this paper, we solve the two-dimensional advection-diffusion equation with random transport velocity. The generalized polynomial chaos expansion is employed to discretize the equation in random space while the spectral hp element method is used for spatial discretization. Numerical results which demonstrate the convergence of generalized polynomial chaos are presented. Specifically, it appears that the fast convergence rate in the variance is the same as that of the mean solution in the Jacobi-chaos unlike the Hermite-chaos. To this end, a new model to represent compact Gaussian distributions is also proposed.
Explicit cost bounds of algorithms for multivariate tensor product problems We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an $\varepsilon$-approximation to the solution. The cost bounds are of the form $(c(d) + 2)\,\beta_1 \left(\beta_2 + \beta_3 \frac{\ln(1/\varepsilon)}{d-1}\right)^{\beta_4 (d-1)} \left(\frac{1}{\varepsilon}\right)^{\beta_5}$. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the $\beta_i$'s do not...
A Multilevel Stochastic Collocation Algorithm for Optimization of PDEs with Uncertain Coefficients In this work, we apply the MG/OPT framework to a multilevel-in-sample-space discretization of optimization problems governed by PDEs with uncertain coefficients. The MG/OPT algorithm is a template for the application of multigrid to deterministic PDE optimization problems. We employ MG/OPT to exploit the hierarchical structure of sparse grids in order to formulate a multilevel stochastic collocation algorithm. The algorithm is provably first-order convergent under standard assumptions on the hierarchy of discretized objective functions as well as on the optimization routines used as pre- and postsmoothers. We present explicit bounds on the total number of PDE solves and an upper bound on the error for one V-cycle of the MG/OPT algorithm applied to a linear quadratic control problem. We provide numerical results that confirm the theoretical bound on the number of PDE solves and show a dramatic reduction in the total number of PDE solves required to solve these optimization problems when compared with standard optimization routines applied to a fixed sparse-grid discretization of the same problem.
Principal manifold learning by sparse grids In this paper, we deal with the construction of lower-dimensional manifolds from high-dimensional data which is an important task in data mining, machine learning and statistics. Here, we consider principal manifolds as the minimum of a regularized, non-linear empirical quantization error functional. For the discretization we use a sparse grid method in latent parameter space. This approach avoids, to some extent, the curse of dimension of conventional grids like in the GTM approach. The arising non-linear problem is solved by a descent method which resembles the expectation maximization algorithm. We present our sparse grid principal manifold approach, discuss its properties and report on the results of numerical experiments for one-, two- and three-dimensional model problems.
A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems Numerical simulation of large-scale dynamical systems plays a fundamental role in studying a wide range of complex physical phenomena; however, the inherent large-scale nature of the models often leads to unmanageable demands on computational resources. Model reduction aims to reduce this computational burden by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. Model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books. However, parametric model reduction has emerged only more recently as an important and vibrant research area, with several recent advances making a survey paper timely. Thus, this paper aims to provide a resource that draws together recent contributions in different communities to survey the state of the art in parametric model reduction methods. Parametric model reduction targets the broad class of problems for which the equations governing the system behavior depend on a set of parameters. Examples include parameterized partial differential equations and large-scale systems of parameterized ordinary differential equations. The goal of parametric model reduction is to generate low-cost but accurate models that characterize system response for different values of the parameters. This paper surveys state-of-the-art methods in projection-based parametric model reduction, describing the different approaches within each class of methods for handling parametric variation and providing a comparative discussion that lends insights to potential advantages and disadvantages in applying each of the methods. We highlight the important role played by parametric model reduction in design, control, optimization, and uncertainty quantification-settings that require repeated model evaluations over different parameter values.
A Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficients We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen-Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order $O((m/N_p)^2)$. Here $m$ and $N_p$ are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect $m \ll N_p$ when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.
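As a rough illustration of the KL building block used in the offline stage (a generic discrete KL computation; the paper's two-level preconditioning and randomized SVD acceleration are not reproduced): given the covariance matrix of the random coefficient sampled on a grid, the KL modes are its leading eigenvectors, and a truncated expansion represents a field sample as the mean plus a combination of modes weighted by independent random variables. The correlation length and grid below are made up.

```python
import numpy as np

def discrete_kl(cov, n_terms):
    """Truncated discrete Karhunen-Loeve expansion from a covariance matrix.
    Returns sqrt(eigenvalue)-scaled modes, so a field sample is
    mean + modes @ xi with xi ~ N(0, I)."""
    vals, vecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_terms]           # keep the n_terms largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Example: exponential covariance on a 1D grid, then draw one random field.
x = np.linspace(0.0, 1.0, 200)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
modes = discrete_kl(cov, n_terms=10)
field = 1.0 + modes @ np.random.default_rng(0).standard_normal(10)
print(field.shape, modes.shape)
```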
Algorithm 890: Sparco: A Testing Framework for Sparse Reconstruction Sparco is a framework for testing and benchmarking algorithms for sparse reconstruction. It includes a large collection of sparse reconstruction problems drawn from the imaging, compressed sensing, and geophysics literature. Sparco is also a framework for implementing new test problems and can be used as a tool for reproducible research. Sparco is implemented entirely in Matlab, and is released as open-source software under the GNU Public License.
On sparse representations in arbitrary redundant bases The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases. The question that is considered is the following: given a matrix A of dimension (n,m) with m>n and a vector b=Ax, find a sufficient condition for b to have a unique sparsest representation x as a linear combination of columns of A. Answers to this question are known when A is the concatenation of two unitary matrices and either an extensive combinatorial search is performed or a linear program is solved. We consider arbitrary A matrices and give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program or a parametrized quadratic program. The proof is elementary and the possibility of using a quadratic program opens perspectives to the case where b=Ax+e with e a vector of noise or modeling errors.
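Sufficient conditions of the kind discussed here are typically phrased via the mutual coherence of $A$; a well-known representative statement (given here for orientation, not necessarily in the exact form derived in this paper) is

$$ \mu(A) = \max_{i\neq j} \frac{|\langle a_i, a_j\rangle|}{\|a_i\|_2\,\|a_j\|_2}, \qquad \|x\|_0 < \tfrac12\Bigl(1+\tfrac{1}{\mu(A)}\Bigr) \;\Longrightarrow\; x \text{ is the unique sparsest representation of } b=Ax, $$

and under the same bound $x$ is also recovered by $\ell_1$ (linear programming) minimization.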
A survey on fuzzy relational equations, part I: classification and solvability Fuzzy relational equations play an important role in fuzzy set theory and fuzzy logic systems, from both of the theoretical and practical viewpoints. The notion of fuzzy relational equations is associated with the concept of "composition of binary relations." In this survey paper, fuzzy relational equations are studied in a general lattice-theoretic framework and classified into two basic categories according to the duality between the involved composite operations. Necessary and sufficient conditions for the solvability of fuzzy relational equations are discussed and solution sets are characterized by means of a root or crown system under some specific assumptions.
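For the most familiar special case, the sup-min (max-min) composition over [0,1], the solvability test is constructive: build the candidate greatest solution from the Gödel implication and check whether it satisfies the equation. The sketch below shows only this classical special case, not the general lattice-theoretic setting surveyed in the paper; the sample matrix and right-hand side are illustrative.

```python
import numpy as np

def greatest_solution_maxmin(A, b):
    """Candidate greatest solution of the max-min relational equation
    max_j min(A[i, j], x[j]) = b[i], built with the Goedel implication
    alpha(a, c) = 1 if a <= c else c.  The system is solvable iff this
    candidate actually satisfies the equation."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    alpha = np.where(A <= b[:, None], 1.0, b[:, None])   # elementwise Goedel implication
    x_hat = alpha.min(axis=0)                            # greatest candidate solution
    lhs = np.max(np.minimum(A, x_hat[None, :]), axis=1)  # A composed with x_hat (max-min)
    return x_hat, bool(np.allclose(lhs, b))

A = [[0.9, 0.4], [0.3, 0.8]]
b = [0.6, 0.8]
print(greatest_solution_maxmin(A, b))   # ([0.6, 1.0], True): solvable, greatest solution shown
```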
Fuzzy homomorphisms of algebras In this paper we consider fuzzy relations compatible with algebraic operations, which are called fuzzy relational morphisms. In particular, we aim our attention to those fuzzy relational morphisms which are uniform fuzzy relations, called uniform fuzzy relational morphisms, and those which are partially uniform F-functions, called fuzzy homomorphisms. Both uniform fuzzy relations and partially uniform F-functions were introduced in a recent paper by us. Uniform fuzzy relational morphisms are especially interesting because they can be conceived as fuzzy congruences which relate elements of two possibly different algebras. We give various characterizations and constructions of uniform fuzzy relational morphisms and fuzzy homomorphisms, we establish certain relationships between them and fuzzy congruences, and we prove homomorphism and isomorphism theorems concerning them. We also point to some applications of uniform fuzzy relational morphisms.
Fuzzifying images using fuzzy wavelet denoising Fuzzy connected filters were recently introduced as an extension of connected filters within the fuzzy set framework. They rely on the representation of the image gray levels by fuzzy quantities, which are suitable to represent imprecision usually contained in images. No robust construction method of these fuzzy images has been introduced so far. In this paper we propose a generic method to fuzzify a crisp image in order to explicitly take imprecision on grey levels into account. This method is based on the conversion of statistical noise present in an image, which cannot be directly represented by fuzzy sets, into a denoising imprecision. The detectability of constant gray level structures in these fuzzy images is also discussed.
1.100165
0.1
0.033493
0.008352
0.002387
0.00027
0.000095
0.000045
0.00002
0.000001
0
0
0
0
What Makes a Professional Video? A Computational Aesthetics Approach Understanding the characteristics of high-quality professional videos is important for video classification, video quality measurement, and video enhancement. A professional video is good not only for its interesting story but also for its high visual quality. In this paper, we study what makes a professional video from the perspective of aesthetics. We discuss how a professional video is created and correspondingly design a variety of features that distinguish professional videos from amateur ones. We study general aesthetics features that are applied to still photos and extend them to videos. We design a variety of features that are particularly relevant to videos. We examined the performance of these features in the problem of professional and amateur video classification. Our experiments show that with these features, 97.3% professional and amateur shot classification accuracy rate is achieved on our own data set and 91.2% professional video detection rate is achieved on a public professional video set. Our experiments also show that the features that are particularly for videos are shown most effective for this task.
Audiovisual Quality Components. The perceived quality of an audiovisual sequence is heavily influenced by both the quality of the audio and the quality of the video. The question then arises as to the relative importance of each factor and whether a regression model predicting audiovisual quality can be devised that is generally applicable.
A New EDI-based Deinterlacing Algorithm In this paper, we propose a new deinterlacing algorithm using edge direction field, edge parity, and motion expansion scheme. The algorithm consists of an EDI (edge dependent interpolation)-based intra-field deinterlacing and inter-field deinterlacing that uses block-based motion detection. Most of the EDI algorithms use pixel-by-pixel or block-by-block distance to estimate the edge direction, which results in many annoying artifacts. We propose the edge direction field, and estimate an interpolation direction using the field and SAD (sum of absolute differences) values. The edge direction field is a set of edge orientations and their gradient magnitudes. The proposed algorithm assumes that a local minimum around the gradient edge field is most probably the true edge direction. Our approach provides good visual results in various kinds of edges (horizontal, narrow and weak). And we propose a new temporal interpolation method based on block motion detection. The algorithm works reliably in scenes which have very fast moving objects and low SNR signals. Experimental results on various data sets show that the proposed algorithm works well for the diverse kinds of sequences and reconstructs flicker-free details in the static region.
Study on visual discomfort induced by stimulus movement at fixed depth on stereoscopic displays using shutter glasses
Impact Of Mobile Devices And Usage Location On Perceived Multimedia Quality We explore the quality impact when audiovisual content is delivered to different mobile devices. Subjects were shown the same sequences on five different mobile devices and a broadcast quality television. Factors influencing quality ratings include video resolution, viewing distance, and monitor size. Analysis shows how subjects' perception of multimedia quality differs when content is viewed on different mobile devices. In addition, quality ratings from laboratory and simulated living room sessions were statistically equivalent.
Symmetrical frame discard method for 3D video over IP networks Three dimensional (3D) video is expected to be an important application for broadcast and IP streaming services. One of the main limitations for the transmission of 3D video over IP networks is network bandwidth mismatch due to the large size of 3D data, which causes fatal decoding errors and mosaic-like damage. This paper presents a novel selective frame discard method to address the problem. The main idea of the proposed method is the symmetrical discard of the two dimensional (2D) video frame and the depth map frame, which enables the efficient utilization of the network bandwidth. Also, the frames to be discarded are selected after additional consideration of the playback deadline, the network bandwidth, and the inter-frame dependency relationship within a group of pictures (GOP). The simulation results demonstrate that the proposed method enhances the media quality of 3D video streaming even in the case of bad network conditions. The proposed method is expected to be used for Internet protocol (IP) based 3D video streaming applications such as 3D IPTV.
Visibility Of Individual Packet Losses In Mpeg-2 Video The ability of a human to visually detect whether a packet has been lost during the transport of compressed video depends heavily on the location of the packet loss and the content of the video. In this paper, we explore when humans can visually detect the error caused by individual packet losses. Using the results of a subjective test based on 1080 packet losses in 72 minutes of video, we design a classifier that uses objective factors extracted from the video to predict the visibility of each error. Our classifier achieves over 93% accuracy.
Analysis Of Packet Loss For Compressed Video: Does Burst-Length Matter? Video communication is often afflicted by various forms of losses, such as packet loss over the Internet. This paper examines the question of whether the packet loss pattern, and in particular the burst length, is important for accurately estimating the expected mean-squared error distortion. Specifically, we (1) verify that the loss pattern does have a significant effect on the resulting distortion, (2) explain why a loss pattern, for example a burst loss, generally produces a larger distortion than an equal number of isolated losses, and (3) propose a model that accurately estimates the expected distortion by explicitly accounting for the loss pattern, inter-frame error propagation, and the correlation between error frames. The accuracy of the proposed model is validated with JVT/H.26L coded video and previous frame concealment, where for most sequences the total distortion is predicted to within +/-0.3 dB for burst loss of length two packets, as compared to prior models which underestimate the distortion by about 1.5 dB. Furthermore, as the burst length increases, our prediction is within +/-0.7 dB, while prior models degrade and underestimate the distortion by over 3 dB.
On capacity-quality tradeoffs in HTTP adaptive streaming over LTE networks The growing consumer demand for mobile video services is one of the key drivers of the evolution of new wireless multimedia solutions requiring exploration of new ways to optimize future wireless networks for video services towards delivering enhanced capacity and quality of experience (QoE). One of these key video enhancing solutions is HTTP adaptive streaming (HAS), which has recently been spreading as a form of internet video delivery and is expected to be deployed more broadly over the next few years. This paper summarizes our proposed capacity and QoE evaluation methodology for HAS services based on the notion of rebuffering percentage as the central indicator of user QoE, and associated empirical data based on simulations conducted over 3GPP LTE networks. Further details on our work can be found in the papers listed in the references.
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Bus interconnection networks In bus interconnection networks every bus provides a communication medium between a set of processors. These networks are modeled by hypergraphs where vertices represent the processors and edges represent the buses. We survey the results obtained on the construction methods that connect a large number of processors in a bus network with given maximum processor degree Δ, maximum bus size r , and network diameter D . (In hypergraph terminology this problem is known as the (Δ, D , r )-hypergraph problem.) The problem for point-to-point networks (the case r = 2) has been extensively studied in the literature. As a result, several families of networks have been proposed. Some of these point-to-point networks can be used in the construction of bus networks. One approach is to consider the dual of the network. We survey some families of bus networks obtained in this manner. Another approach is to view the point-to-point networks as a special case of the bus networks and to generalize the known constructions to bus networks. We provide a summary of the tools developed in the theory of hypergraphs and directed hypergraphs to handle this approach.
Increasing energy efficiency in sensor networks: blue noise sampling and non-convex matrix completion The energy cost of a sensor network is dominated by the data acquisition and communication cost of individual sensors. At each sampling instant it is unnecessary to sample and communicate the data at all sensors since the data is highly redundant. We find that, if only a (random) subset of the sensors acquires and transmits the sample values, it is possible to estimate the sample values at all the sensors under certain realistic assumptions. Since only a subset of all the sensors is active at each sampling instant, the energy cost of the network is reduced over time. When the sensor nodes are assumed to lie on a regular rectangular grid, the problem can be recast as a low-rank matrix completion problem. Current theoretical work on matrix completion relies on purely random sampling strategies and convex estimation algorithms. In this work, we will empirically show that better reconstruction results are obtained when more sophisticated sampling schemes are used, followed by non-convex matrix completion algorithms. We find that the proposed approach gives surprisingly good results.
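As a rough illustration of the kind of non-convex completion step such a scheme could rely on, the sketch below alternates between imputing the unsampled grid entries and projecting onto rank-r matrices (a hard-impute style iteration). It is a generic toy, not the paper's algorithm or sampling scheme; the grid size, rank, and 40% sampling ratio are assumptions.

```python
import numpy as np

def hard_impute(M_obs, mask, r, iters=200):
    """Alternate between filling unobserved entries and projecting onto rank-r matrices."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)               # keep observed samples, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]                 # best rank-r approximation
    return X

rng = np.random.default_rng(4)
n, r = 60, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-3 "sensor grid" data
mask = rng.random((n, n)) < 0.4                                  # 40% of the nodes report a sample
X = hard_impute(M * mask, mask, r)
print("relative error on missing entries:",
      np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask]))
```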
A method for multiple attribute decision making with incomplete weight information under uncertain linguistic environment Multi-attribute decision making problems are studied in which the information about the attribute values takes the form of uncertain linguistic variables. The concept of deviation degree between uncertain linguistic variables is defined, and the ideal point of an uncertain linguistic decision matrix is also defined. A formula for the possibility degree of comparison between uncertain linguistic variables is proposed. Based on the deviation degree and the ideal point of uncertain linguistic variables, an optimization model is established; by solving this model, a simple and exact formula is derived to determine the attribute weights when the information about the weights is completely unknown. When the information about the attribute weights is partly known, another optimization model is established to determine the weights, which are then used to aggregate the given uncertain linguistic decision information. A method based on the possibility degree is given to rank the alternatives. Finally, an illustrative example is given.
A Machine Learning Approach to Personal Pronoun Resolution in Turkish.
score_0 through score_13: 1.203153, 0.203153, 0.203153, 0.203153, 0.101576, 0.000822, 0.000318, 0.000094, 0.000025, 0, 0, 0, 0, 0
A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing In this paper the problem of optimization of the measurement matrix in compressive (also called compressed) sensing framework is addressed. In compressed sensing a measurement matrix that has a small coherence with the sparsifying dictionary (or basis) is of interest. Random measurement matrices have been used so far since they present small coherence with almost any sparsifying dictionary. However, it has been recently shown that optimizing the measurement matrix toward decreasing the coherence is possible and can improve the performance. Based on this conclusion, we propose here an alternating minimization approach for this purpose which is a variant of Grassmannian frame design modified by a gradient-based technique. The objective is to optimize an initially random measurement matrix to a matrix which presents a smaller coherence than the initial one. We established several experiments to measure the performance of the proposed method and compare it with those of the existing approaches. The results are encouraging and indicate improved reconstruction quality, when utilizing the proposed method.
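For intuition about the objective, a much-simplified alternating sketch in the same spirit (shrink the large off-diagonal Gram entries, then project back to a feasible measurement matrix, roughly as in Elad's well-known coherence-reduction scheme) is shown below. It is not the paper's gradient-modified Grassmannian procedure; the sizes, shrinkage factor, threshold, and the orthonormal sparsifying basis are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 80                                        # assumed sizes: 20 measurements, 80 atoms
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # an arbitrary orthonormal sparsifying basis
Phi = rng.standard_normal((m, n))                    # random initial measurement matrix

def coherence(D):
    Dn = D / np.linalg.norm(D, axis=0)               # column-normalized effective dictionary
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

print("initial coherence:", round(coherence(Phi @ Psi), 3))
gamma, thresh = 0.6, 0.2                             # made-up shrinkage factor and threshold
for _ in range(50):
    D = Phi @ Psi
    Dn = D / np.linalg.norm(D, axis=0)
    G = Dn.T @ Dn
    # Shrink large off-diagonal Gram entries, keep the unit diagonal.
    shrink = np.where(np.abs(G) > thresh, gamma * G, G)
    np.fill_diagonal(shrink, 1.0)
    # Project back to a rank-m Gram and recover Phi (Psi is orthonormal, so Phi = D Psi^T).
    evals, evecs = np.linalg.eigh(shrink)
    top = np.argsort(evals)[-m:]
    D_new = np.diag(np.sqrt(np.maximum(evals[top], 0.0))) @ evecs[:, top].T
    Phi = D_new @ Psi.T
print("optimized coherence:", round(coherence(Phi @ Psi), 3))
```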
Bayesian compressive sensing for cluster structured sparse signals In the traditional framework of compressive sensing (CS), only a sparse prior on the signal in the time or frequency domain is adopted to guarantee exact inverse recovery. Beyond the sparse prior, structure in the sparsity pattern of the signal has also been used as an additional prior, called model-based compressive sensing, such as clustered structure and tree structure on wavelet coefficients. In this paper, cluster structured sparse signals are investigated. Under the framework of Bayesian compressive sensing, a hierarchical Bayesian model is employed to model both the sparse prior and the cluster prior, and Markov Chain Monte Carlo (MCMC) sampling is implemented for the inference. Unlike the state-of-the-art algorithms that also take the cluster prior into account, the proposed algorithm solves the inverse problem automatically; prior information on the number of clusters and the size of each cluster is not required. The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms.
A* orthogonal matching pursuit: Best-first search for compressed sensing signal recovery Compressed sensing is a developing field aiming at the reconstruction of sparse signals acquired in reduced dimensions, which make the recovery process under-determined. The required solution is the one with minimum ℓ0 norm due to sparsity, however it is not practical to solve the ℓ0 minimization problem. Commonly used techniques include ℓ1 minimization, such as Basis Pursuit (BP) and greedy pursuit algorithms such as Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP). This manuscript proposes a novel semi-greedy recovery approach, namely A* Orthogonal Matching Pursuit (A*OMP). A*OMP performs A* search to look for the sparsest solution on a tree whose paths grow similar to the Orthogonal Matching Pursuit (OMP) algorithm. Paths on the tree are evaluated according to a cost function, which should compensate for different path lengths. For this purpose, three different auxiliary structures are defined, including novel dynamic ones. A*OMP also incorporates pruning techniques which enable practical applications of the algorithm. Moreover, the adjustable search parameters provide means for a complexity-accuracy trade-off. We demonstrate the reconstruction ability of the proposed scheme on both synthetically generated data and images using Gaussian and Bernoulli observation matrices, where A*OMP yields less reconstruction error and higher exact recovery frequency than BP, OMP and SP. Results also indicate that novel dynamic cost functions provide improved results as compared to a conventional choice.
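For reference, the greedy baseline that A*OMP extends with a best-first tree search is plain OMP; a minimal sketch with assumed problem sizes and a Gaussian observation matrix follows.

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: pick k atoms greedily, re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))      # atom most correlated with the residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 100, 256, 8                                    # assumed sizes and sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)             # Gaussian observation matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```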
A Probabilistic and RIPless Theory of Compressed Sensing This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution $F$; it includes all standard models—e.g., Gaussian, frequency measurements—discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution $F$ obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) to hold near the sparsity level in question, nor a random model for the signal. As an example, the paper shows that a signal with $s$ nonzero entries can be faithfully recovered from about $s \log n$ Fourier coefficients that are contaminated with noise.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
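A stripped-down illustration of the variable-splitting plus augmented-Lagrangian idea is ADMM applied to an ℓ2 data term with an ℓ1 regularizer; the sketch below is generic (a dense synthetic operator rather than an imaging forward model), and the operator, regularization weight, and penalty parameter are assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM on min 0.5*||Ax - b||^2 + lam*||z||_1 with the split x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # quadratic subproblem
        z = soft(x + u, lam / rho)                           # proximal step of the l1 term
        u = u + x - z                                        # scaled dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200); x_true[rng.choice(200, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_lasso(A, b, lam=0.05)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```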
Greed is good: algorithmic results for sparse approximation This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
Process and environmental variation impacts on ASIC timing With each semiconductor process node, the impacts on performance of environmental and semiconductor process variations become a larger portion of the cycle time of the product. Simple guard-banding for these effects leads to increased product development times and uncompetitive products. In addition, traditional static timing methodologies are unable to cope with the large number of permutations of process, voltage, and temperature corners created by these independent sources of variation. In this paper we discuss the sources of variation by introducing the concepts of systematic inter-die variation, systematic intra-die variation, and intra-die random variation. We show that by treating these forms of variation differently, we can achieve design closure with less guard-banding than traditional methods.
Logical structure of fuzzy IF-THEN rules This paper provides a logical basis for manipulation with fuzzy IF-THEN rules. Our theory is wide enough and it encompasses not only finding a conclusion by means of the compositional rule of inference due to Lotfi A. Zadeh but also other kinds of approximate reasoning methods, e.g., perception-based deduction, provided that there exists a possibility to characterize them within a formal logical system. In contrast with other approaches employing variants of multiple-valued first-order logic, the approach presented here employs fuzzy type theory of V. Novák which has sufficient expressive power to present the essential concepts and results in a compact, elegant and justifiable form. Within the effectively formalized representation developed here, based on a complete logical system, it is possible to reconstruct numerous well-known properties of CRI-related fuzzy inference methods, albeit not from the analytic point of view as usually presented, but as formal derivations of the logical system employed. The authors are confident that eventually all relevant knowledge about fuzzy inference methods based on fuzzy IF-THEN rule bases will be represented, formalized and backed up by proof within the well-founded logical representation presented here. An immediate positive consequence of this approach is that suddenly all elements of a fuzzy inference method based on fuzzy IF-THEN rules are ‘first class citizens´ of the representation: there are clear, logically founded definitions for fuzzy IF-THEN rule bases to be consistent, complete, or independent.
Karhunen-Loève approximation of random fields by generalized fast multipole methods KL approximation of a possibly instationary random field $a(\omega, x) \in L^2(\Omega, dP; L^\infty(D))$ subject to prescribed mean field $E_a(x) = \int_\Omega a(\omega, x)\, dP(\omega)$ and covariance $V_a(x, x') = \int_\Omega (a(\omega, x) - E_a(x))(a(\omega, x') - E_a(x'))\, dP(\omega)$ in a polyhedral domain $D \subset \mathbb{R}^d$ is analyzed. We show how for stationary covariances $V_a(x, x') = g_a(|x - x'|)$ with $g_a(z)$ analytic outside of $z = 0$, an M-term approximate KL-expansion $a_M(\omega, x)$ of $a(\omega, x)$ can be computed in log-linear complexity. The approach applies in arbitrary domains $D$ and for nonseparable covariances $C_a$. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree $p \geq 0$ on a quasiuniform, possibly unstructured mesh of width $h$ in $D$, plus a generalized fast multipole accelerated Krylov eigensolver. The approximate KL-expansion $a_M(\omega, x)$ of $a(\omega, x)$ has accuracy $O(\exp(-b M^{1/d}))$ if $g_a$ is analytic at $z = 0$ and accuracy $O(M^{-k/d})$ if $g_a$ is $C^k$ at zero. It is obtained in $O(M N (\log N)^b)$ operations, where $N = O(h^{-d})$.
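On a small 1-D grid the truncated KL expansion can be written down directly with a dense eigensolver, which is exactly the cubic-cost route the fast multipole acceleration is meant to avoid; the sketch below is only that brute-force illustration, with an assumed exponential covariance, grid size, and truncation level.

```python
import numpy as np

# Discrete KL expansion of a stationary zero-mean field on a 1-D grid.
N, M, ell = 400, 20, 0.1
x = np.linspace(0.0, 1.0, N)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance g(|x - x'|)

evals, evecs = np.linalg.eigh(C)                     # dense eigensolve, O(N^3)
idx = np.argsort(evals)[::-1][:M]                    # keep the M largest KL modes
lam, phi = evals[idx], evecs[:, idx]

rng = np.random.default_rng(3)
xi = rng.standard_normal(M)                          # uncorrelated unit-variance coefficients
sample = phi @ (np.sqrt(lam) * xi)                   # one realization of the truncated field
print("captured variance fraction:", lam.sum() / evals.sum())
```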
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
Is Gauss Quadrature Better than Clenshaw-Curtis? We compare the convergence behavior of Gauss quadrature with that of its younger brother, Clenshaw-Curtis. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of $\log((z+1)/(z-1))$ in the complex plane. Gauss quadrature corresponds to Padé approximation at $z=\infty$. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at $z=\infty$ is only half as high, but which is nevertheless equally accurate near $[-1,1]$.
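A small Python analogue of the kind of comparison described (the clencurt construction below is ported from Trefethen's widely circulated MATLAB routine) might look as follows; the test integrand and rule sizes are arbitrary choices.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes and weights on [-1, 1] (port of Trefethen's clencurt.m)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    ii = np.arange(1, n)                  # interior nodes 1..n-1
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[ii]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
    w[ii] = 2.0 * v / n
    return x, w

f = lambda t: np.exp(-t**2)                          # a smooth test integrand
xg, wg = np.polynomial.legendre.leggauss(12)         # 12-point Gauss-Legendre rule
xc, wc = clencurt(24)                                # a Clenshaw-Curtis rule of similar cost
print(wg @ f(xg), wc @ f(xc))                        # both estimates of the same integral
```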
Practical RDF schema reasoning with annotated semantic web data Semantic Web data with annotations is becoming available, with the YAGO knowledge base being a prominent example. In this paper we present an approach to perform the closure of large RDF Schema annotated semantic web data using standard database technology. In particular, we exploit several alternatives to address the problem of computing the transitive closure with real fuzzy semantic data extracted from YAGO in the PostgreSQL database management system. We benchmark these alternatives and compare them to classical RDF Schema reasoning, providing the first implementation of annotated RDF Schema in persistent storage.
Fuzzy concepts and formal methods: some illustrative examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems (IS) problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, and soft pre-conditions and system requirements as a series of linguistically quantified propositions.
Subjective Quality Metric For 3d Video Services Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
score_0 through score_13: 1.1, 0.1, 0.1, 0.014286, 0.008333, 0.000617, 0, 0, 0, 0, 0, 0, 0, 0
Variation-aware performance verification using at-speed structural test and statistical timing Meeting the tight performance specifications mandated by the customer is critical for contract manufactured ASICs. To address this, at speed test has been employed to detect subtle delay failures in manufacturing. However, the increasing process spread in advanced nanometer ASICs poses considerable challenges to predicting hardware performance from timing models. Performance verification in the presence of process variation is difficult because the critical path is no longer unique. Different paths become frequency limiting in different process corners. In this paper, we present a novel variation-aware method based on statistical timing to select critical paths for structural test. Node criticalities are computed to determine the probabilities of different circuit nodes being on the critical path across process variation. Moreover, path delays are projected into different process corners using their linear delay function forms. Experimental results for three multimillion gate ASICs demonstrate the effectiveness of our methods.
A Framework for Scalable Postsilicon Statistical Delay Prediction Under Process Variations Due to increased variability trends in nanoscale integrated circuits, statistical circuit analysis and optimization has become essential. While statistical timing analysis has an important role to play in this process, it is equally important to develop die-specific delay prediction techniques using postsilicon measurements. We present a novel method for postsilicon delay analysis. We gather data from a small number of on-chip test structures, and combine this information with presilicon statistical timing analysis to obtain narrow die-specific timing probability density function (PDF). Experimental results show that for the benchmark suite being considered, taking all parameter variations into consideration, our approach can obtain a PDF whose standard deviation is 79.0% smaller, on average, than the statistical timing analysis result. The accuracy of the method defined by our metric is 99.6% compared to Monte Carlo simulation. The approach is scalable to smaller test structure overheads and can still produce acceptable results.
Statistical Static Timing Analysis Considering Process Variation Model Uncertainty Increasing variability in modern manufacturing processes makes it important to predict the yields of chip designs at early design stage. In recent years, a number of statistical static timing analysis (SSTA) and statistical circuit optimization techniques have emerged to quickly estimate the design yield and perform robust optimization. These statistical methods often rely on the availability of statistical process variation models whose accuracy, however, is severely hampered by the limitations in test structure design, test time, and various sources of inaccuracy inevitably incurred in process characterization. To consider model characterization inaccuracy, we present an efficient importance sampling based optimization framework that can translate the uncertainty in process models to the uncertainty in circuit performance, thus offering the desired statistical best/worst case circuit analysis capability accounting for the unavoidable complexity in process characterization. Furthermore, our new technique provides valuable guidance to process characterization. Examples are included to demonstrate the application of our general analysis framework under the context of SSTA.
Use of statistical timing analysis on real designs A vast literature has been published on statistical static timing analysis (SSTA), its motivations, its different implementations and their runtime/accuracy trade-offs. However, very limited literature exists on the applicability and the usage models of this new technology on real designs. This work focuses on the use of SSTA in real designs and its practical benefits and limitations over the traditional design flow. The authors introduce two new metrics to drive the optimization: skew criticality and aggregate sensitivity. Practical benefits of SSTA are demonstrated for clock tree analysis, and correct modeling of on-chip-variations. The use of SSTA to cover the traditional corner analysis and to drive optimization is also discussed. Results are reported on three designs implemented on a 90nm technology
Practical Variation-Aware Interconnect Delay and Slew Analysis for Statistical Timing Verification Interconnects constitute a dominant source of circuit delay for modern chip designs. The variations of critical dimensions in modern VLSI technologies lead to variability in interconnect performance that must be fully accounted for in timing verification. However, handling a multitude of inter-die/intra-die variations and assessing their impacts on circuit performance can dramatically complicate the timing analysis. In this paper, a practical interconnect delay and slew analysis technique is presented to facilitate efficient evaluation of wire performance variability. By harnessing a collection of computationally efficient procedures and closed-form formulas, process and input signal variations are directly mapped into the variability of the output delay and slew. Since our approach produces delay and slew expressions parameterized in the underlying process variations, it can be harnessed to enable statistical timing analysis while considering important statistical correlations. Our experimental results have indicated that the presented analysis is accurate regardless of location of sink nodes and it is also robust over a wide range of process variations
Non-Gaussian statistical timing analysis using second-order polynomial fitting In the nanometer manufacturing region, process variation causes significant uncertainty for circuit performance verification. Statistical static timing analysis (SSTA) is thus developed to estimate timing distribution under process variation. However, most of the existing SSTA techniques have difficulty in handling the non-Gaussian variation distribution and non-linear dependency of delay on variation sources. To solve such a problem, in this paper, we first propose a new method to approximate the max operation of two non-Gaussian random variables through second-order polynomial fitting. We then present new non-Gaussian SSTA algorithms under two types of variational delay models: quadratic model and semi-quadratic model (i.e., quadratic model without crossing terms). All atomic operations (such as max and sum) of our algorithms are performed by closed-form formulas, hence they scale well for large designs. Experimental results show that compared to the Monte-Carlo simulation, our approach predicts the mean, standard deviation, and skewness within 1%, 1%, and 5% error, respectively. Our approach is more accurate and also 20x faster than the most recent method for non-Gaussian and nonlinear SSTA.
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Statistical timing analysis with correlated non-Gaussian parameters using independent component analysis We propose a scalable and efficient parameterized block-based statistical static timing analysis algorithm incorporating both Gaussian and non-Gaussian parameter distributions, capturing spatial correlations using a grid-based model. As a preprocessing step, we employ independent component analysis to transform the set of correlated non-Gaussian parameters to a basis set of parameters that are statistically independent, and principal components analysis to orthogonalize the Gaussian parameters. The procedure requires minimal input information: given the moments of the variational parameters, we use a Pade approximation-based moment matching scheme to generate the distributions of the random variables representing the signal arrival times, and preserve correlation information by propagating arrival times in a canonical form. For the ISCAS89 benchmark circuits, as compared to Monte Carlo simulations, we obtain average errors of 0.99% and 2.05%, respectively, in the mean and standard deviation of the circuit delay. For a circuit with |G| gates and a layout with g spatial correlation grids, the complexity of our approach is O(g|G|)
A Study of Variance Reduction Techniques for Estimating Circuit Yields The efficiency of several variance reduction techniques (in particular, importance sampling, stratified sampling, and control variates) is studied with respect to their application in estimating circuit yields. This study suggests that one essentially has to have a good approximation of the region of acceptability in order to achieve significant variance reduction. Further, all the methods considered are based, either explicitly or implicitly, on the use of a model. The control variate method appears to be more practical for implementation in a general purpose statistical circuit analysis program. Stratified sampling is the most simple to implement, but yields only very modest reductions in the variance of the yield estimator. Lastly, importance sampling is very useful when there are few parameters and the yield is very high or very low; however, a good practical technique for its implementation, in general, has not been found.
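A toy sketch of the control-variate idea for yield estimation is given below: the control is a linearized delay whose pass probability is known in closed form, and the crude Monte Carlo estimate is corrected by the observed discrepancy on the control. The delay model, spec, and sensitivities are made up.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
N, spec = 2000, 1.12
g = np.array([0.05, 0.03])                        # assumed linear delay sensitivities

def delay(p):
    """Toy nonlinear delay model: nominal + linear + a small cross term."""
    return 1.0 + p @ g + 0.02 * p[:, 0] * p[:, 1]

p = rng.standard_normal((N, 2))                   # process parameters ~ N(0, I)
Y = (delay(p) <= spec).astype(float)              # pass/fail indicator (crude Monte Carlo)
Z = (1.0 + p @ g <= spec).astype(float)           # control: linearized delay
EZ = norm.cdf((spec - 1.0) / np.linalg.norm(g))   # its pass probability is known exactly

covYZ = np.cov(Y, Z)
c = covYZ[0, 1] / covYZ[1, 1]                     # estimated optimal control-variate coefficient
yield_cv = Y.mean() - c * (Z.mean() - EZ)         # corrected yield estimate
print(f"crude MC: {Y.mean():.4f}  control variate: {yield_cv:.4f}")
```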
Fast Variational Analysis of On-Chip Power Grids by Stochastic Extended Krylov Subspace Method This paper proposes a novel stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering lognormal leakage current variations. The new method, called StoEKS, applies Hermite polynomial chaos to represent the random variables in both power grid networks and input leakage currents. However, different from the existing orthogonal polynomial-based stochastic simulation method, the extended Krylov subspace (EKS) method is employed to compute variational responses from the augmented matrices consisting of the coefficients of Hermite polynomials. Our contribution lies in the acceleration of the spectral stochastic method using the EKS method to fast solve the variational circuit equations for the first time. By using the reduction technique, the new method partially mitigates the increased circuit-size problem associated with the augmented matrices from the Galerkin-based spectral stochastic method. Experimental results show that the proposed method is about two orders of magnitude faster than the existing Hermite PC-based simulation method and many orders of magnitude faster than Monte Carlo methods with marginal errors. StoEKS is scalable for analyzing much larger circuits than the existing Hermite PC-based methods.
Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem (Math. Program., doi: 10.1007/s10107-009-0306-5, 2009). By incorporating an approximate singular value decomposition technique in this algorithm, the solution to the matrix rank minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported.
A conceptual framework for fuzzy query processing—A step toward very intelligent database systems This paper is concerned with techniques for fuzzy query processing in a database system. By a fuzzy query we mean a query which uses imprecise or fuzzy predicates (e.g. AGE = “VERY YOUNG”, SALARY = “MORE OR LESS HIGH”, YEAR-OF-EMPLOYMENT = “RECENT”, SALARY ⪢ 20,000, etc.). As a basis for fuzzy query processing, a fuzzy retrieval system based on the theory of fuzzy sets and linguistic variables is introduced. In our system model, the first step in processing fuzzy queries consists of assigning meaning to fuzzy terms (linguistic values), of a term-set, used for the formulation of a query. The meaning of a fuzzy term is defined as a fuzzy set in a universe of discourse which contains the numerical values of a domain of a relation in the system database.
Bit precision analysis for compressed sensing This paper studies the stability of some reconstruction algorithms for compressed sensing in terms of the bit precision. Considering the fact that practical digital systems deal with discretized signals, we motivate the importance of the total number of accurate bits needed from the measurement outcomes in addition to the number of measurements. It is shown that if one uses a $2k \times n$ Vandermonde matrix with roots on the unit circle as the measurement matrix, $O(\ell + k \log(n/k))$ bits of precision per measurement are sufficient to reconstruct a k-sparse signal $x \in \mathbb{R}^n$ with dynamic range (i.e., the absolute ratio between the largest and the smallest nonzero coefficients) at most $2^\ell$ within $\ell$ bits of precision, hence identifying its correct support. Finally, we obtain an upper bound on the total number of required bits when the measurement matrix satisfies a restricted isometry property, which is in particular the case for random Fourier and Gaussian matrices. For very sparse signals, the upper bound on the number of required bits for Vandermonde matrices is shown to be better than this general upper bound.
Intersection sets in AG(n, q) and a characterization of the hyperbolic quadric in PG(3,q) Bruen proved that if A is a set of points in AG(n, q) which intersects every hyperplane in at least t points, then |A| ≥ (n+t-1)(q-1) + 1, leaving as an open question how good such a bound is. Here we prove that, up to a trivial case, if t ≥ ((n - 1)(q - 1) + 1)/2, then Bruen's bound can be improved. If t is equal to the integer part of ((n - 1)(q - 1) + 1)/2, then there are some examples which attain such a lower bound. Somehow, this suggests the following combinatorial characterization: if a set S of points in PG(3, q) meets every affine plane in at least q - 1 points and is of minimum size with respect to this property, then S is a hyperbolic quadric.
score_0 through score_13: 1.024765, 0.022222, 0.022222, 0.016667, 0.006178, 0.002397, 0.000648, 0.000153, 0.000028, 0.000003, 0, 0, 0, 0
Variability-Driven Buffer Insertion Considering Correlations In this work we investigate the buffer insertion problem under process variations. Sub-100-nm fabrication processes cause significant variations in many design parameters. We propose a probabilistic buffer insertion method assuming variations on both interconnect and buffer parameters and consider their correlations due to common sources of variation. Our proposed method is compatible with the more accurate D2M wire-delay model, as well as the Elmore delay model. In addition, a probabilistic pruning criterion is proposed to evaluate potential solutions while considering their correlations. Experimental results demonstrate that considering correlations using the more accurate D2M delay model results in meeting the timing constraint with an average probability of 0.63, whereas probabilistic buffer insertion ignoring correlations and deterministic methods meet the timing constraint with average probabilities of 0.25 and 0.19, respectively.
Fast buffer insertion considering process variations Advanced process technologies call for a proactive consideration of process variations in design to ensure high parametric timing yield. Despite of its popular use in almost any high performance IC designs nowadays, however, buffer insertion has not gained enough attention in addressing this issue. In this paper, we propose a novel algorithm for buffer insertion to consider process variations. The major contribution of this work is two-fold: (1) an efficient technique to handle correlated process variations under nonlinear operations; (2) a provable transitive closure pruning rule that makes linear complexity variation-aware pruning possible. The proposed techniques enable an efficient implementation of variation-aware buffer insertion. Compared to an existing algorithm considering process variations, our algorithm achieves more than 25x speed-up. We also show that compared to the conventional deterministic approach, the proposed buffer insertion algorithm considering correlated process variations improves the parametric timing yield by more than 15%.
Simultaneous Buffer Insertion and Wire Sizing Considering Systematic CMP Variation and Random Leff Variation This paper presents extensions of the dynamic-programming (DP) framework to consider buffer insertion and wire-sizing under effects of process variation. We study the effectiveness of this approach to reduce timing impact caused by chemical-mechanical planarization (CMP)-induced systematic variation and random Leff process variation in devices. We first present a quantitative study on the impact of CMP to interconnect parasitics. We then introduce a simple extension to handle CMP effects in the buffer insertion and wire sizing problem by simultaneously considering fill insertion (SBWF). We also tackle the same problem but with random Leff process variation (vSBWF) by incorporating statistical timing into the DP framework. We develop an efficient yet accurate heuristic pruning rule to approximate the computationally expensive statistical problem. Experiments under conservative assumption on process variation show that SBWF algorithm obtains 1.6% timing improvement over the variation-unaware solution. Moreover, our statistical vSBWF algorithm results in 43.1% yield improvement on average. We also show that our approaches have polynomial time complexity with respect to the net-size. The proposed extensions on the DP framework is orthogonal to other power/area-constrained problems under the same framework, which has been extensively studied in the literature
Parametric yield maximization using gate sizing based on efficient statistical power and delay gradient computation With the increased significance of leakage power and performance variability, the yield of a design is becoming constrained both by power and performance limits, thereby significantly complicating circuit optimization. In this paper, we propose a new optimization method for yield optimization under simultaneous leakage power and performance limits. The optimization approach uses a novel leakage power and performance analysis that is statistical in nature and considers the correlation between leakage power and performance to enable accurate computation of circuit yield under power and delay limits. We then propose a new heuristic approach to incrementally compute the gradient of yield with respect to gate sizes in the circuit with high efficiency and accuracy. We then show how this gradient information can be effectively used by a non-linear optimizer to perform yield optimization. We consider both inter-die and intra-die variations with correlated and random components. The proposed approach is implemented and tested and we demonstrate up to 40% yield improvement compared to a deterministically optimized circuit.
Interval-valued reduced order statistical interconnect modeling We show how recent advances in the handling of correlated interval representations of range uncertainty can be used to predict the impact of statistical manufacturing variations on linear interconnect. We represent correlated statistical variations in RLC parameters as sets of correlated intervals, and show how classical model order reduction methods - AWE and PRIMA - can be re-targeted to compute interval-valued, rather than scalar-valued reductions. By applying a statistical interpretation and sampling to the resulting compact interval-valued model, we can efficiently estimate the impact of variations on the original circuit. Results show the technique can predict mean delay with errors between 5% and 10% for correlated RLC parameter variations up to 35%.
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Semantics of Context-Free Languages "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are "synthesized", i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are "inherited", i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature. A simple technique for specifying the "meaning" of languages defined by context-free grammars is introduced in Section 1 of this paper, and its basic mathematical properties are investigated in Sections 2 and 3. An example which indicates how the technique can be applied to the formal definition of programming languages is described in Section 4, and finally, Section 5 contains a somewhat biased comparison of the present method to other known techniques for semantic definition. The discussion in this paper is oriented primarily towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let us suppose that we want to give a precise definition of binary notation for numbers. This can be done in many ways, and in this section we want to consider a manner of definition which can be generalized so that the meaning of other notations can be expressed in the same way. One such way to define binary notation is to base a definition on
Universally composable security: a new paradigm for cryptographic protocols We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.
Computing with words in decision making: foundations, trends and prospects Computing with Words (CW) methodology has been used in several different environments to narrow the differences between human reasoning and computing. As Decision Making is a typical human mental process, it seems natural to apply the CW methodology in order to create and enrich decision models in which the information that is provided and manipulated has a qualitative nature. In this paper we make a review of the developments of CW in decision making. We begin with an overview of the CW methodology and we explore different linguistic computational models that have been applied to the decision making field. Then we present an historical perspective of CW in decision making by examining the pioneer papers in the field along with its most recent applications. Finally, some current trends, open questions and prospects in the topic are pointed out.
Completeness and consistency conditions for learning fuzzy rules The completeness and consistency conditions were introduced in order to achieve acceptable concept recognition rules. In real problems, we can handle noise-affected examples and it is not always possible to maintain both conditions. Moreover, when we use fuzzy information there is a partial matching between examples and rules, therefore the consistency condition becomes a matter of degree. In this paper, a learning algorithm based on soft consistency and completeness conditions is proposed. This learning algorithm combines rule and feature selection in a single process, and it is tested on different databases.
On proactive perfectly secure message transmission This paper studies the interplay of network connectivity and perfectly secure message transmission under the corrupting influence of a Byzantine mobile adversary that may move from player to player but can corrupt no more than t players at any given time. It is known that, in the stationary adversary model where the adversary corrupts the same set of t players throughout the protocol, perfectly secure communication among any pair of players is possible if and only if the underlying synchronous network is (2t + 1)-connected. Surprisingly, we show that (2t + 1)-connectivity is sufficient (and of course, necessary) even in the proactive (mobile) setting where the adversary is allowed to corrupt different sets of t players in different rounds of the protocol. In other words, adversarial mobility has no effect on the possibility of secure communication. Towards this, we use the notion of a Communication Graph, which is useful in modelling scenarios with adversarial mobility. We also show that protocols for reliable and secure communication proposed in [15] can be modified to tolerate the mobile adversary. Further these protocols are round-optimal if the underlying network is a collection of disjoint paths from the sender S to receiver R.
Real-Time Convex Optimization in Signal Processing This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard form...
Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establishes an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
score_0 through score_13: 1.24, 0.24, 0.068571, 0.00869, 0.003373, 0.000091, 0, 0, 0, 0, 0, 0, 0, 0
Statistical confidence intervals for fuzzy data The application of fuzzy sets theory to statistical confidence intervals for unknown fuzzy parameters is proposed in this paper by considering fuzzy random variables. In order to obtain the belief degrees under the sense of fuzzy sets theory, we transform the original problem into the optimization problems. We provide the computational procedure to solve the optimization problems. A numerical example is also provided to illustrate the possible application of fuzzy sets theory to statistical confidence intervals.
Fuzzy assessment method on sampling survey analysis Developing a well-designed market survey questionnaire will ensure that surveyors get the information they need about the target market. A traditional sampling survey via questionnaire, which rates items with linguistic variables, is inherently vague and has difficulty reflecting an interviewee's incomplete and uncertain thoughts. Therefore, if we can use fuzzy expressions to capture the degree of an interviewee's feelings based on his or her own concepts, the sampling result will be closer to the interviewee's real thoughts. In this study, we apply a fuzzy approach to sampling surveys to carry out aggregated assessment analysis. The proposed fuzzy assessment method makes it easy to assess the sampling survey and compute the aggregate evaluation.
Evaluating new product development performance by fuzzy linguistic computing New product development (NPD) is indeed the cornerstone for companies to maintain and enhance the competitive edge. However, developing new products is a complex and risky decision-making process. It involves a search of the environment for opportunities, the generation of project options, and the evaluation by different experts of multiple attributes, both qualitative and quantitative. To perceive and to measure effectively the capability of NPD are real challenging tasks for business managers. This paper presents a 2-tuple fuzzy linguistic computing approach to deal with heterogeneous information and information loss problems during the processes of subjective evaluation integration. The proposed method which is based on the group decision-making scenario to assist business managers to measure the performance of NPD manipulates the heterogeneous integration processes and avoids the information loss effectively. Finally, its feasibility is demonstrated by the result of NPD performance evaluation for a high-technology company in Taiwan.
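The usual machinery behind such 2-tuple linguistic computing is the pair (label, symbolic translation), which lets averages of label indices be expressed back on the original scale without rounding away information. A minimal sketch in the style of the Herrera-Martinez 2-tuple model, with an assumed seven-label scale and made-up expert ratings:

```python
# Minimal 2-tuple linguistic aggregation (Herrera-Martinez style representation).
LABELS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def to_two_tuple(beta):
    """Delta: map a value in [0, 6] to the closest label plus a symbolic translation."""
    i = int(beta + 0.5)           # round to the nearest label index
    return i, beta - i            # alpha lies in [-0.5, 0.5)

def to_value(two_tuple):
    """Delta^{-1}: back to the underlying numeric value."""
    i, alpha = two_tuple
    return i + alpha

# Three hypothetical experts rate one NPD attribute (label indices are made up).
ratings = [(4, 0.0), (5, 0.0), (5, 0.0)]          # high, very_high, very_high
mean_beta = sum(to_value(t) for t in ratings) / len(ratings)
label, alpha = to_two_tuple(mean_beta)
print(f"aggregated assessment: ({LABELS[label]}, {alpha:+.2f})")
```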
Incorporating filtering techniques in a fuzzy linguistic multi-agent model for information gathering on the web In (Computing with Words, Wiley, New York, 2001, p. 251; Soft Comput. 6 (2002) 320; Fuzzy Logic and The Internet, Physica-Verlag, Springer, Wurzburg, Berlin, 2003) we presented different fuzzy linguistic multi-agent models for helping users in their information gathering processes on the Web. In this paper we describe a new fuzzy linguistic multi-agent model that incorporates two information filtering techniques in its structure: a content-based filtering agent and a collaborative filtering agent. Both elements are introduced to increase the information filtering possibilities of multi-agent system on the Web and, in such a way, to improve its retrieval issues.
Web shopping expert using new interval type-2 fuzzy reasoning Finding a product with high quality and reasonable price online is a difficult task due to the uncertainty of Web data and queries. In order to handle the uncertainty problem, the Web Shopping Expert, a new type-2 fuzzy online decision support system, is proposed. In the Web Shopping Expert, a fast interval type-2 fuzzy method directly applies all rules with type-1 fuzzy sets to perform type-2 fuzzy reasoning efficiently. The parameters of the type-2 fuzzy sets are optimized by a least squares method. The Web Shopping Expert based on the interval type-2 fuzzy inference system provides reasonable decisions for online users.
Applying multi-objective evolutionary algorithms to the automatic learning of extended Boolean queries in fuzzy ordinal linguistic information retrieval systems The performance of information retrieval systems (IRSs) is usually measured using two different criteria, precision and recall. Precision is the ratio of the relevant documents retrieved by the IRS in response to a user's query to the total number of documents retrieved, whilst recall is the ratio of the number of relevant documents retrieved to the total number of relevant documents for the user's query that exist in the documentary database. In fuzzy ordinal linguistic IRSs (FOLIRSs), where extended Boolean queries are used, defining the user's queries in a manual way is usually a complex task. In this contribution, our interest is focused on the automatic learning of extended Boolean queries in FOLIRSs by means of multi-objective evolutionary algorithms considering both mentioned performance criteria. We present an analysis of two well-known general-purpose multi-objective evolutionary algorithms to learn extended Boolean queries in FOLIRSs. These evolutionary algorithms are the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm (SPEA2).
A Model Based On Fuzzy Linguistic Information To Evaluate The Quality Of Digital Libraries The Web is changing information access processes and is one of the most important information media. Thus, developments on the Web are having a great influence on the development of other information access instruments such as digital libraries. As digital libraries are developed to satisfy user needs, user satisfaction is essential for the success of a digital library. The aim of this paper is to present a model based on fuzzy linguistic information to evaluate the quality of digital libraries. The quality evaluation of digital libraries is defined using users' perceptions of the quality of the digital services provided through their Websites. We assume a fuzzy linguistic modeling to represent the users' perceptions and apply automatic tools of fuzzy computing with words based on the LOWA and LWA operators to compute global quality evaluations of digital libraries. Additionally, we show an example of the application of this model in which three Spanish academic digital libraries are evaluated by fifty users.
Dealing with heterogeneous information in engineering evaluation processes Before selecting a design for a large engineering system, several design proposals are evaluated by studying different key aspects. In such a design assessment process, different criteria need to be evaluated, which can be of both a quantitative and a qualitative nature, and the knowledge provided by experts may be vague and/or incomplete. Consequently, the assessment problems may include different types of information (numerical, linguistic, interval-valued). Experts are usually forced to provide knowledge in the same domain and scale, resulting in higher levels of uncertainty. In this paper, we propose a flexible framework that can be used to model the assessment problems in different domains and scales. A fuzzy evaluation process in the proposed framework is investigated to deal with uncertainty and manage heterogeneous information in engineering evaluation processes.
A fuzzy multi-criteria group decision making framework for evaluating health-care waste disposal alternatives Nowadays, as in all other organizations, the amount of waste generated in the health-care institutions is rising due to their extent of service. Medical waste management is a common problem of developing countries including Turkey, which are becoming increasingly conscious that health-care wastes require special treatment. Accordingly, one of the most important problems encountered in Istanbul, the most crowded metropolis of Turkey, is the disposal of health-care waste (HCW) from health-care institutions. Evaluating HCW disposal alternatives, which considers the need to trade-off multiple conflicting criteria with the involvement of a group of experts, is a highly important multi-criteria group decision making problem. The inherent imprecision and vagueness in criteria values concerning HCW disposal alternatives justify the use of fuzzy set theory. This paper presents a fuzzy multi-criteria group decision making framework based on the principles of fuzzy measure and fuzzy integral for evaluating HCW treatment alternatives for Istanbul. In group decision making problems, aggregation of expert opinions is essential for properly conducting the evaluation process. In this study, the ordered weighted averaging (OWA) operator is used to aggregate decision makers' opinions. Economic, technical, environmental and social criteria and their related sub-criteria are employed to assess HCW treatment alternatives, namely ''incineration'', ''steam sterilization'', ''microwave'', and ''landfill''. A comparative analysis is presented using another classical operator to aggregate decision makers' preferences.
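As a hedged illustration of how the OWA operator mentioned above aggregates expert opinions, the following minimal Python sketch sorts the arguments in descending order and applies positional weights; the scores and weights are illustrative and not taken from the study.

```python
# Minimal sketch of the ordered weighted averaging (OWA) operator.
# Scores and weights are illustrative; weights must sum to one.

def owa(values, weights):
    """Sort the arguments in descending order, then take the weighted sum."""
    assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Four experts score one treatment alternative on a single criterion in [0, 1].
expert_scores = [0.7, 0.9, 0.6, 0.8]
weights = [0.4, 0.3, 0.2, 0.1]            # emphasizes the more optimistic opinions
print(owa(expert_scores, weights))        # 0.9*0.4 + 0.8*0.3 + 0.7*0.2 + 0.6*0.1 ~ 0.8
```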
Uncertain linguistic aggregation operators based approach to multiple attribute group decision making under uncertain linguistic environment In this paper, two uncertain linguistic aggregation operators called uncertain linguistic ordered weighted averaging (ULOWA) operator and uncertain linguistic hybrid aggregation (ULHA) operator are proposed. An approach to multiple attribute group decision making with uncertain linguistic information is developed based on the ULOWA and the ULHA operators. Finally, a practical application of the developed approach to the problem of evaluating university faculty for tenure and promotion is given.
On intuitionistic gradation of openness In this paper, we introduce a concept of intuitionistic gradation of openness on fuzzy subsets of a nonempty set X and define an intuitionistic fuzzy topological space. We prove that the category of intuitionistic fuzzy topological spaces and gradation preserving mappings is a topological category. We study compactness of intuitionistic fuzzy topological spaces and prove an analogue of Tychonoff's theorem.
Non-local Regularization of Inverse Problems This article proposes a new framework to regularize linear inverse problems using the total variation on non-local graphs. This non-local graph makes it possible to adapt the penalization to the geometry of the underlying function to be recovered. A fast algorithm iteratively computes both the solution of the regularization process and the non-local graph adapted to this solution. We show numerical applications of this method to the resolution of image processing inverse problems such as inpainting, super-resolution and compressive sampling.
TEMPORAL AND SPATIAL SCALING FOR STEREOSCOPIC VIDEO COMPRESSION In stereoscopic video, it is well-known that compression efficiency can be improved, without sacrificing PSNR, by predicting one view from the other. Moreover, additional gain can be achieved by subsampling one of the views, since the Human Visual System can perceive high frequency information from the other view. In this work, we propose subsampling of one of the views by scaling its temporal rate and/or spatial size at regular intervals using a real-time stereoscopic H.264/AVC codec, and assess the subjective quality of the resulting videos using DSCQS test methodology. We show that stereoscopic videos can be coded at a rate about 1.2 times that of monoscopic videos with little visual quality degradation.
SPECO: Stochastic Perturbation based Clock tree Optimization considering temperature uncertainty Modern computing system applications or workloads can bring significant non-uniform temperature gradient on-chip, and hence can cause significant temperature uncertainty during clock-tree synthesis. Existing designs of clock-trees have to assume a given time-invariant worst-case temperature map but cannot deal with a set of temperature maps under a set of workloads. For robust clock-tree synthesis considering temperature uncertainty, this paper presents a new problem formulation: Stochastic PErturbation based Clock Optimization (SPECO). In SPECO algorithm, one nominal clock-tree is pre-synthesized with determined merging points. The impact from the stochastic temperature variation is modeled by perturbation (or small physical displacement) of merging points to offset the induced skews. Because the implementation cost is reduced but the design complexity is increased, the determination of optimal positions of perturbed merging points requires a computationally efficient algorithm. In this paper, one Non-Monte-Carlo (NMC) method is deployed to generate skew and skew variance by one-time analysis when a set of stochastic temperature maps is already provided. Moreover, one principal temperature-map analysis is developed to reduce the design complexity by clustering correlated merging points based on the subspace of the correlation matrix. As a result, the new merging points can be efficiently determined level by level with both skew and its variance reduced. The experimental results show that our SPECO algorithm can effectively reduce the clock-skew and its variance under a number of workloads with minimized wire-length overhead and computational cost.
1.207167
0.052625
0.035109
0.023705
0.002051
0.001352
0.000476
0.000234
0.000117
0.000027
0
0
0
0
Iterative Solvers for the Stochastic Finite Element Method This paper presents an overview and comparison of iterative solvers for linear stochastic partial differential equations (PDEs). A stochastic Galerkin finite element discretization is applied to transform the PDE into a coupled set of deterministic PDEs. Specialized solvers are required to solve the very high-dimensional systems that result after a finite element discretization of the resulting set. This paper discusses one-level iterative methods, based on matrix splitting techniques; multigrid methods, which apply a coarsening in the spatial dimension; and multilevel methods, which make use of the hierarchical structure of the stochastic discretization. Also Krylov solvers with suitable preconditioning are addressed. A local Fourier analysis provides quantitative convergence properties. The efficiency and robustness of the methods are illustrated on two nontrivial numerical problems. The multigrid solver with block smoother yields the most robust convergence properties, though a cheaper point smoother performs as well in most cases. Multilevel methods based on coarsening the stochastic dimension perform in general poorly due to a large computational cost per iteration. Moderate size problems can be solved very quickly by a Krylov method with a mean-based preconditioner. For larger spatial and stochastic discretizations, however, this approach suffers from its nonoptimal convergence properties.
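As a hedged illustration of the mean-based preconditioning idea surveyed above, the following Python sketch solves a toy stochastic Galerkin system with Kronecker structure, A = G0 ⊗ K0 + G1 ⊗ K1, by preconditioned conjugate gradients, using G0 ⊗ K0 as the mean-based preconditioner; the blocks are small synthetic matrices, not a real finite element discretization.

```python
# Toy mean-based preconditioned CG for a stochastic Galerkin system with
# Kronecker structure A = G0 (x) K0 + G1 (x) K1. The blocks are synthetic
# stand-ins; row-major vec(X) with X of shape (stochastic modes, spatial dofs).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n, p = 50, 10                                     # spatial dofs, stochastic modes
rng = np.random.default_rng(0)
K0 = np.diag(rng.uniform(1.0, 2.0, n))            # "mean" stiffness block (SPD)
K1 = 0.1 * np.diag(rng.uniform(-1.0, 1.0, n))     # fluctuation block
G0 = np.eye(p)                                    # Gram matrix of an orthonormal basis
G1 = np.diag(rng.uniform(-0.5, 0.5, p))           # stochastic coupling block

def matvec(x):
    X = x.reshape(p, n)                            # vec_row(A X B^T) = (A (x) B) vec_row(X)
    return (X @ K0.T + (G1 @ X) @ K1.T).ravel()    # G0 is the identity here

def precond(x):                                    # mean-based: apply (G0 (x) K0)^(-1)
    X = x.reshape(p, n)
    return np.linalg.solve(K0, X.T).T.ravel()

A = LinearOperator((n * p, n * p), matvec=matvec)
M = LinearOperator((n * p, n * p), matvec=precond)
b = rng.standard_normal(n * p)
x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg stopped with info = {info}")
```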
Inversion of Robin coefficient by a spectral stochastic finite element approach This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
Multigrid and sparse-grid schemes for elliptic control problems with random coefficients. A multigrid and sparse-grid computational approach to solving nonlinear elliptic optimal control problems with random coefficients is presented. The proposed scheme combines multigrid methods with sparse-grids collocation techniques. Within this framework the influence of randomness of problem’s coefficients on the control provided by the optimal control theory is investigated. Numerical results of computation of stochastic optimal control solutions and formulation of mean control functions are presented.
A Kronecker Product Preconditioner for Stochastic Galerkin Finite Element Discretizations The discretization of linear partial differential equations with random data by means of the stochastic Galerkin finite element method results in general in a large coupled linear system of equations. Using the stochastic diffusion equation as a model problem, we introduce and study a symmetric positive definite Kronecker product preconditioner for the Galerkin matrix. We compare the popular mean-based preconditioner with the proposed preconditioner which—in contrast to the mean-based construction—makes use of the entire information contained in the Galerkin matrix. We report on results of test problems, where the random diffusion coefficient is given in terms of a truncated Karhunen-Loève expansion or is a lognormal random field.
Efficient Solvers for a Linear Stochastic Galerkin Mixed Formulation of Diffusion Problems with Random Data We introduce a stochastic Galerkin mixed formulation of the steady-state diffusion equation and focus on the efficient iterative solution of the saddle-point systems obtained by combining standard finite element discretizations with two distinct types of stochastic basis functions. So-called mean-based preconditioners, based on fast solvers for scalar diffusion problems, are introduced for use with the minimum residual method. We derive eigenvalue bounds for the preconditioned system matrices and report on the efficiency of the chosen preconditioning schemes with respect to all the discretization parameters.
Reduced Basis Collocation Methods for Partial Differential Equations with Random Coefficients The sparse grid stochastic collocation method is a new method for solving partial differential equations with random coefficients. However, when the probability space has high dimensionality, the number of points required for accurate collocation solutions can be large, and it may be costly to construct the solution. We show that this process can be made more efficient by combining collocation with reduced basis methods, in which a greedy algorithm is used to identify a reduced problem to which the collocation method can be applied. Because the reduced model is much smaller, costs are reduced significantly. We demonstrate with numerical experiments that this is achieved with essentially no loss of accuracy.
Dimension–Adaptive Tensor–Product Quadrature We consider the numerical integration of multivariate functions defined over the unit hypercube. Here, we especially address the high–dimensional case, where in general the curse of dimension is encountered. Due to the concentration of measure phenomenon, such functions can often be well approximated by sums of lower–dimensional terms. The problem, however, is to find a good expansion given little knowledge of the integrand itself. The dimension–adaptive quadrature method which is developed and presented in this paper aims to find such an expansion automatically. It is based on the sparse grid method which has been shown to give good results for low- and moderate–dimensional problems. The dimension–adaptive quadrature method tries to find important dimensions and adaptively refines in this respect guided by suitable error estimators. This leads to an approach which is based on generalized sparse grid index sets. We propose efficient data structures for the storage and traversal of the index sets and discuss an efficient implementation of the algorithm. The performance of the method is illustrated by several numerical examples from computational physics and finance where dimension reduction is obtained from the Brownian bridge discretization of the underlying stochastic process.
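As a hedged, much simplified illustration of the dimension-adaptive idea, the following Python sketch adapts only the per-dimension order of a full tensor-product Gauss-Legendre rule on [0,1]^d, refining the dimension whose refinement changes the estimate the most; the paper's method works with generalized sparse-grid index sets and dedicated error estimators and is considerably more sophisticated.

```python
# Toy dimension-adaptive quadrature on [0, 1]^d: keep a per-dimension
# Gauss-Legendre order and greedily refine the most "important" dimension.
import itertools
import numpy as np

def tensor_quad(f, orders):
    """Full tensor-product Gauss-Legendre rule with the given per-dimension orders."""
    rules = [np.polynomial.legendre.leggauss(n) for n in orders]
    rules = [(0.5 * (x + 1.0), 0.5 * w) for x, w in rules]   # map [-1, 1] -> [0, 1]
    total = 0.0
    for combo in itertools.product(*[range(n) for n in orders]):
        point = np.array([rules[d][0][i] for d, i in enumerate(combo)])
        weight = np.prod([rules[d][1][i] for d, i in enumerate(combo)])
        total += weight * f(point)
    return total

def dimension_adaptive(f, dim, steps=10):
    orders = [1] * dim
    estimate = tensor_quad(f, orders)
    for _ in range(steps):
        # crude error indicator: how much does refining dimension d move the estimate?
        gains = [abs(tensor_quad(f, orders[:d] + [orders[d] + 1] + orders[d + 1:]) - estimate)
                 for d in range(dim)]
        orders[int(np.argmax(gains))] += 1
        estimate = tensor_quad(f, orders)
    return estimate, orders

# Integrand that depends strongly on x0 and only weakly on x1 and x2.
f = lambda x: np.exp(3.0 * x[0]) + 0.01 * x[1] * x[2]
value, orders = dimension_adaptive(f, dim=3)
print(value, orders)      # most refinement should end up in the first dimension
```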
Multi-Resolution-Analysis Scheme for Uncertainty Quantification in Chemical Systems This paper presents a multi-resolution approach for the propagation of parametric uncertainty in chemical systems. It is motivated by previous studies where Galerkin formulations of Wiener-Hermite expansions were found to fail in the presence of steep dependences of the species concentrations with regard to the reaction rates. The multi-resolution scheme is based on representation of the uncertain concentration in terms of compact polynomial multi-wavelets, allowing for the control of the convergence in terms of polynomial order and resolution level. The resulting representation is shown to greatly improve the robustness of the Galerkin procedure in presence of steep dependences. However, this improvement comes with a higher computational cost which drastically increases with the number of uncertain reaction rates. To overcome this drawback an adaptive strategy is proposed to control locally (in the parameter space) and in time the resolution level. The efficiency of the method is demonstrated for an uncertain chemical system having eight random parameters.
A Sparse Composite Collocation Finite Element Method for Elliptic SPDEs. This work presents a stochastic collocation method for solving elliptic PDEs with random coefficients and forcing term which are assumed to depend on a finite number of random variables. The method consists of a hierarchic wavelet discretization in space and a sequence of hierarchic collocation operators in the probability domain to approximate the solution's statistics. The selection of collocation points is based on a Smolyak construction of zeros of orthogonal polynomials with respect to the probability density function of each random input variable. A sparse composition of levels of spatial refinements and stochastic collocation points is then proposed and analyzed, resulting in a substantial reduction of overall degrees of freedom. Like in the Monte Carlo approach, the algorithm results in solving a number of uncoupled, purely deterministic elliptic problems, which allows the integration of existing fast solvers for elliptic PDEs. Numerical examples on two-dimensional domains will then demonstrate the superiority of this sparse composite collocation finite element method compared to the “full composite” collocation finite element method and the Monte Carlo method.
FastCap: a multipole accelerated 3-D capacitance extraction program A fast algorithm for computing the capacitance of a complicated three-dimensional geometry of ideal conductors in a uniform dielectric is described and its performance in the capacitance extractor FastCap is examined. The algorithm is an acceleration of the boundary-element technique for solving the integral equation associated with the multiconductor capacitance extraction problem. The authors present a generalized conjugate residual iterative algorithm with a multipole approximation to compute the iterates. This combination reduces the complexity so that accurate multiconductor capacitance calculations grow nearly as nm, where m is the number of conductors. Performance comparisons on integrated circuit bus crossing problems show that for problems with as few as 12 conductors the multipole accelerated boundary element method can be nearly 500 times faster than Gaussian-elimination-based algorithms, and five to ten times faster than the iterative method alone, depending on required accuracy
Block-sparsity: Coherence and efficient recovery We consider compressed sensing of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. Based on an uncertainty relation for block-sparse signals, we define a block-coherence measure and show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-sparsity is shown to guarantee successful recovery through a mixed ℓ2/ℓ1 optimization approach. The significance of the results lies in the fact that making explicit use of block-sparsity can yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
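As a hedged illustration of the block-sparse recovery discussed above, the following Python sketch implements a basic block orthogonal matching pursuit; the dimensions, block size and Gaussian sensing matrix are illustrative and not tied to the paper's analysis.

```python
# Basic block orthogonal matching pursuit (block-OMP) sketch: at each step the
# block whose columns correlate most with the residual is added, and the signal
# is re-estimated by least squares on all chosen blocks. Noiseless toy example.
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    m, n = A.shape
    blocks = [np.arange(b * block_size, (b + 1) * block_size) for b in range(n // block_size)]
    residual, chosen = y.copy(), []
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, blk].T @ residual) for blk in blocks]
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([blocks[b] for b in chosen])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x_hat = np.zeros(n)
    x_hat[cols] = coef
    return x_hat

rng = np.random.default_rng(1)
m, n, d = 60, 120, 4                          # measurements, signal length, block size
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[8:12] = rng.standard_normal(d)              # two active blocks (block-sparse signal)
x[40:44] = rng.standard_normal(d)
y = A @ x
x_hat = block_omp(A, y, block_size=d, n_blocks_to_pick=2)
print(np.linalg.norm(x - x_hat))              # close to zero in this noiseless setting
```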
An efficient algorithm for compressed MR imaging using total variation and wavelets Compressed sensing, an emerging multidisciplinary field involving mathematics, probability, optimization, and signal processing, focuses on reconstructing an unknown signal from a very limited number of samples. Because information such as boundaries of organs is very sparse in most MR images, compressed sensing makes it possible to reconstruct the same MR image from a very limited set of measurements, significantly reducing the MRI scan duration. In order to do that, however, one has to solve the difficult problem of minimizing nonsmooth functions on large data sets. To handle this, we propose an efficient algorithm that jointly minimizes the ℓ1 norm, total variation, and a least squares measure, one of the most powerful models for compressive MR imaging. Our algorithm is based upon an iterative operator-splitting framework. The calculations are accelerated by continuation and take advantage of fast wavelet and Fourier transforms, enabling our code to process MR images from real-life applications. We show that faithful MR images can be reconstructed from a subset that represents a mere 20 percent of the complete set of measurements.
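As a hedged, much simplified illustration of the operator-splitting idea, the following Python sketch solves only the ℓ1 plus least-squares part of such a model with iterative soft thresholding (ISTA); the total-variation term, the continuation strategy and the fast transforms used in the paper are all omitted.

```python
# ISTA sketch for min_u 0.5 * ||A u - b||^2 + lam * ||u||_1 on a toy problem.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=300):
    u = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ u - b)
        u = soft_threshold(u - step * grad, step * lam)
    return u

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / np.sqrt(80)   # undersampled "measurement" operator
u_true = np.zeros(200)
u_true[rng.choice(200, size=10, replace=False)] = rng.standard_normal(10)
b = A @ u_true
u_hat = ista(A, b, lam=0.01)
print(np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))   # small relative error
```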
Some specific types of fuzzy relation equations In this paper we study some specific types of fuzzy equations. More specifically, we focus on analyzing equations involving two fuzzy subsets of the same referential and a fuzzy relation defined over fuzzy subsets.
Analyzing parliamentary elections based on voting advice application data The main goal of this paper is to model the values of Finnish citizens and the members of the parliament. To achieve this goal, two databases are combined: voting advice application data and the results of the parliamentary elections in 2011. First, the data is converted to a high-dimensional space. Then, it is projected onto two principal components. The projection allows us to visualize the main differences between the parties. The value grids are produced with a kernel density estimation method without explicitly using the questions of the voting advice application. However, we find meaningful interpretations for the axes in the visualizations with the analyzed data. Subsequently, all candidate value grids are weighted by the results of the parliamentary elections. The result can be interpreted as a distribution grid for Finnish voters' values.
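As a hedged illustration of the projection-plus-density pipeline described above, the following Python sketch projects synthetic answer vectors onto two principal components and evaluates a Gaussian kernel density estimate on a regular grid; the data are random stand-ins, not the Finnish voting advice data.

```python
# PCA projection to two components followed by a kernel density "value grid".
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
answers = rng.integers(1, 6, size=(500, 30)).astype(float)   # 500 candidates, 30 questions

centered = answers - answers.mean(axis=0)                     # PCA via SVD
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T                                  # (500, 2) projection

kde = gaussian_kde(coords.T)                                  # density over the projection
xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), 100)
ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), 100)
XX, YY = np.meshgrid(xs, ys)
density = kde(np.vstack([XX.ravel(), YY.ravel()])).reshape(XX.shape)
print(density.shape)                                          # (100, 100) value grid
```

Weighting the candidates, for example by election results as the paper does, could be handled through the weights argument of gaussian_kde.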
1.027991
0.045367
0.030245
0.010537
0.008208
0.003754
0.00036
0.000052
0.000002
0
0
0
0
0
QoS Provisioning in Converged Satellite and Terrestrial Networks: A Survey of the State-of-the-Art. It has been widely acknowledged that future networks will need to provide significantly more capacity than current ones in order to deal with the increasing traffic demands of the users. Particularly in regions where optical fibers are unlikely to be deployed due to economical constraints, this is a major challenge. One option to address this issue is to complement existing narrow-band terrestrial networks with additional satellite connections. Satellites cover huge areas, and recent developments have considerably increased the available capacity while decreasing the cost. However, geostationary satellite links have significantly different link characteristics than most terrestrial links, mainly due to the higher signal propagation time, which often renders them not suitable for delay intolerant traffic. This paper surveys the current state-of-the-art of satellite and terrestrial network convergence. We mainly focus on scenarios in which satellite networks complement existing terrestrial infrastructures, i.e., parallel satellite and terrestrial links exist, in order to provide high bandwidth connections while ideally achieving a similar end user quality-of-experience as in high bandwidth terrestrial networks. Thus, we identify the technical challenges associated with the convergence of satellite and terrestrial networks and analyze the related work. Based on this, we identify four key functional building blocks, which are essential to distribute traffic optimally between the terrestrial and the satellite networks. These are the traffic requirement identification function, the link characteristics identification function, as well as the traffic engineering function and the execution function. Afterwards, we survey current network architectures with respect to these key functional building blocks and perform a gap analysis, which shows that all analyzed network architectures require adaptations to effectively support converged satellite and terrestrial networks. Hence, we conclude by formulating several open research questions with respect to satellite and terrestrial network convergence.
A study on QoE-QoS relationship for multimedia services in satellite networks The quality of experience (QoE) of users has become an important factor for service providers seeking to retain users of their services. The measurable quality of service (QoS) refers to technical performance rather than user satisfaction, but it is closely related to the user's QoE. Most existing research focuses on the QoE-QoS relationship for services in terrestrial networks, with little attention to satellite networks. Multimedia service delivery over satellite networks is promising in the emerging future Internet, so it is appealing to analyze how service QoE depends on QoS in satellite networks. In this paper, we build a simulated satellite network based on OPNET software to measure the QoS parameters and obtain the distorted videos/voices. Then, based on the original and distorted video/voice sequences, we perform a subjective test to obtain the subjective opinion scores representing the user's QoE. Finally, on the basis of the collected data, the influence of each single QoS parameter on QoE is analyzed and the QoS parameter thresholds for different QoE levels are provided.
QoS/QoE Mapping and Adjustment Model in the Cloud-based Multimedia Infrastructure The quality of service (QoS) requirement and multicast service support from IP networks are two important factors for providing cloud-based multimedia services. However, QoS lacks an important element in characterizing multimedia services, namely, user perception. In this paper, we propose a QoS to quality of experience (QoE) mapping and adjustment model to translate the network QoS parameters into the user's QoE in the cloud-based multimedia infrastructure. The model is composed of three parts: QoE function, practical measurement and statistical analysis, and a simulated streaming video platform. We first discuss how to design the QoE function, and then use the practical measurement and statistical analysis to derive the optimum values of eight QoE parameters in the proposed QoE function. To map the network QoS parameters into the user's QoE, a simulated streaming video platform is used to denote a cloud-based multimedia infrastructure. Each multicast member that has guaranteed bandwidth in the simulated streaming video platform uses the QoE function to calculate its QoE score after watching the streaming video. If the QoE score is less than a derived lower bound value, it means that one of the multicast members has a low QoE. In this situation, the genetic algorithm is enabled to adjust the constructed diffserv-aware multicast tree to respond quickly to the degradation of QoE. The simulation results show that the user's QoE and network QoS are consistent with each other.
An example of a real-time QoE IPTV service estimator This paper considers an estimator that includes mathematical modelling of the parameters of the physical channel, which acts as the information carrier and is among the weakest links in the telecommunication chain of information transfer. It also identifies the physical layer parameters that influence the quality of multimedia service delivery, or QoE (Quality of Experience). By modelling the above-mentioned parameters, a relation is defined between the degradations that appear in the channel between the user and the central telecommunication equipment, where one medium dominates the information transfer with a certain error probability. Degradations in a physical channel can be detected by observing changes in the values of the channel transfer function or the appearance of increased noise. Estimation of the QoE of an IPTV (Internet Protocol Television) service is especially necessary during real-time service delivery, since in that case the mentioned degradations may appear at any moment and cause packet loss.
Energy saving approaches for video streaming on smartphone based on QoE modeling In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and propose energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If video frames are not skipped, it is advisable to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.
A survey of QoE assurance in converged networks High user satisfaction with using an application or service is the most meaningful quality evaluation criterion. For this reason the set of issues encompassed by the term quality of experience (QoE), i.e., the quality perceived subjectively by the end-user, is key to Internet service providers, network and software engineers, developers and scientists. From the technical point of view, to assure a high level of QoE, an appropriate level of quality of service (QoS), grade of service (GoS), and quality of resilience (QoR) must be provisioned by the network involved in service delivery. This paper studies QoE provisioning approaches with respect to the following convergence requirements: any service, anywhere, anytime, any user device, any media and networking technology, and by any operator. Challenges related to QoS, GoS and QoR provisioning in converged networks and implications on QoE provisioning are discussed. Convergence between fixed and wireless networks as well as within wireless networks based on different technologies, are considered. A variety of technologies and concepts for future converged networks are discussed.
Quality of experience for HTTP adaptive streaming services The growing consumer demand for mobile video services is one of the key drivers of the evolution of new wireless multimedia solutions requiring exploration of new ways to optimize future wireless networks for video services towards delivering enhanced quality of experience (QoE). One of these key video enhancing solutions is HTTP adaptive streaming (HAS), which has recently been spreading as a form of Internet video delivery and is expected to be deployed more broadly over the next few years. As a relatively new technology in comparison with traditional push-based adaptive streaming techniques, deployment of HAS presents new challenges and opportunities for content developers, service providers, network operators and device manufacturers. One of these important challenges is developing evaluation methodologies and performance metrics to accurately assess user QoE for HAS services, and effectively utilizing these metrics for service provisioning and optimizing network adaptation. In that vein, this article provides an overview of HAS concepts, and reviews the recently standardized QoE metrics and reporting framework in 3GPP. Furthermore, we present an end-to-end QoE evaluation study on HAS conducted over 3GPP LTE networks and conclude with a discussion of future challenges and opportunities in QoE optimization for HAS services.
Measures of similarity among fuzzy concepts: A comparative analysis Many measures of similarity among fuzzy sets have been proposed in the literature, and some have been incorporated into linguistic approximation procedures. The motivations behind these measures are both geometric and set-theoretic. We briefly review 19 such measures and compare their performance in a behavioral experiment. For crudely categorizing pairs of fuzzy concepts as either “similar” or “dissimilar,” all measures performed well. For distinguishing between degrees of similarity or dissimilarity, certain measures were clearly superior and others were clearly inferior; for a few subjects, however, none of the distance measures adequately modeled their similarity judgments. Measures that account for ordering on the base variable proved to be more highly correlated with subjects' actual similarity judgments. And, surprisingly, the best measures were ones that focus on only one “slice” of the membership function. Such measures are easiest to compute and may provide insight into the way humans judge similarity among fuzzy concepts.
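As a hedged illustration, the following Python sketch computes two set-theoretic similarity measures of the kind compared above on discretized membership functions; the fuzzy sets are invented examples and the two measures are common textbook choices, not necessarily among the nineteen reviewed in the paper.

```python
# Two simple similarity measures between fuzzy sets given as membership vectors.
import numpy as np

def similarity_jaccard(a, b):
    """sum(min) / sum(max): 1.0 for identical sets, lower as they diverge."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def similarity_hamming(a, b):
    """1 minus the normalized Hamming-type distance between memberships."""
    return 1.0 - np.abs(a - b).mean()

x = np.linspace(0, 10, 101)
warm = np.clip(1 - np.abs(x - 6) / 3, 0, 1)   # triangular membership centered at 6
hot  = np.clip(1 - np.abs(x - 8) / 3, 0, 1)   # triangular membership centered at 8
print(similarity_jaccard(warm, hot), similarity_hamming(warm, hot))
```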
Proactive public key and signature systems Emerging applications like electronic commerce and secure communications over open networks have made clear the fundamental role of public key cryptography as a unique enabler for world-wide scale security solutions. On the other hand, these solutions clearly expose the fact that the protection of private keys is a security bottleneck in these sensitive applications. This problem is further worsened in the cases where a single and unchanged private key must be kept secret for very long time (such is the case of certification authority keys, bank and e-cash keys, etc.). One crucial defense against exposure of private keys is offered by threshold cryptography where the private key functions (like signatures or decryption) are distributed among several parties such that a predetermined number of parties must cooperate in order to correctly perform these operations. This protects keys from any single point of failure. An attacker needs to break into a multiplicity of locations before it can compromise the system. However, in the case of long-lived keys the attacker still has a considerable period of time (like a few years) to gradually break the system. Here we present proactive public key systems where the threshold solutions are further enhanced by periodic
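As a hedged toy illustration of the threshold and proactive-refresh ideas discussed above, the following Python sketch Shamir-shares a secret among n parties and refreshes the shares with a fresh sharing of zero, so that shares from different periods cannot be combined; it works over a toy prime field and is not a real implementation of the paper's schemes.

```python
# Toy (t, n) Shamir secret sharing with a proactive share refresh.
import random

P = 2**127 - 1                                # a Mersenne prime; toy field only

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, eval_poly(coeffs, i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def refresh(shares, t):
    """Proactive refresh: add a sharing of zero, leaving the secret unchanged."""
    zero_coeffs = [0] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, (y + eval_poly(zero_coeffs, x)) % P) for x, y in shares]

secret = 123456789
old_shares = share(secret, t=3, n=5)
new_shares = refresh(old_shares, t=3)
print(reconstruct(new_shares[:3]) == secret)   # True: any 3 refreshed shares suffice
```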
Statistical ordering of correlated timing quantities and its application for path ranking Correct ordering of timing quantities is essential for both timing analysis and design optimization in the presence of process variation, because timing quantities are no longer a deterministic value, but a distribution. This paper proposes a novel metric, called tiered criticalities, which guarantees to provide a unique order for a set of correlated timing quantities while properly taking into account full process space coverage. Efficient algorithms are developed to compute this metric, and its effectiveness on path ranking for at-speed testing is also demonstrated.
Bayesian compressive sensing for cluster structured sparse signals In the traditional framework of compressive sensing (CS), only a sparse prior on the property of signals in the time or frequency domain is adopted to guarantee exact inverse recovery. Other than the sparse prior, structures on the sparse pattern of the signal have also been used as an additional prior, called model-based compressive sensing, such as clustered structure and tree structure on wavelet coefficients. In this paper, the cluster structured sparse signals are investigated. Under the framework of Bayesian compressive sensing, a hierarchical Bayesian model is employed to model both the sparse prior and the cluster prior, and then Markov Chain Monte Carlo (MCMC) sampling is implemented for the inference. Unlike the state-of-the-art algorithms which also take the cluster prior into account, the proposed algorithm solves the inverse problem automatically, even though prior information on the number of clusters and the size of each cluster is unknown. The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms.
R-POPTVR: a novel reinforcement-based POPTVR fuzzy neural network for pattern classification. In general, a fuzzy neural network (FNN) is characterized by its learning algorithm and its linguistic knowledge representation. However, it does not necessarily interact with its environment when the training data is assumed to be an accurate description of the environment under consideration. In interactive problems, it would be more appropriate for an agent to learn from its own experience through interactions with the environment, i.e., reinforcement learning. In this paper, three clustering algorithms are developed based on the reinforcement learning paradigm. This allows a more accurate description of the clusters as the clustering process is influenced by the reinforcement signal. They are the REINFORCE clustering technique I (RCT-I), the REINFORCE clustering technique II (RCT-II), and the episodic REINFORCE clustering technique (ERCT). The integrations of the RCT-I, the RCT-II, and the ERCT within the pseudo-outer product truth value restriction (POPTVR), which is a fuzzy neural network integrated with the truth value restriction (TVR) inference scheme in its five-layered feedforward neural network, form the RPOPTVR-I, the RPOPTVR-II, and the ERPOPTVR, respectively. The Iris, Phoneme, and Spiral data sets are used for benchmarking. For both the Iris and Phoneme data, the RPOPTVR yields classification results that are higher than those of the original POPTVR and the modified POPTVR over the three test trials. For the Spiral data set, the RPOPTVR-II is able to outperform the others by a margin of at least 5.8% over multiple test trials. The three reinforcement-based clustering techniques applied to the POPTVR network are able to exhibit the trial-and-error search characteristic that yields higher qualitative performance.
pFFT in FastMaxwell: a fast impedance extraction solver for 3D conductor structures over substrate In this paper we describe the acceleration algorithm implemented in FastMaxwell, a program for wideband electromagnetic extraction of complicated 3D conductor structures over substrate. FastMaxwell is based on the integral domain mixed potential integral equation (MPIE) formulation, with 3-D full-wave substrate dyadic Green's function kernel. Two dyadic Green's functions are implemented. The pre-corrected Fast Fourier Transform (pFFT) algorithm is generalized and used to accelerate the translational invariant complex domain dyadic kernel. Computational results are given for a variety of structures to validate the accuracy and efficiency of FastMaxwell. O(NlogN) computational complexity is demonstrated by our results in both time and memory.
Towards a linguistic probability theory The term "fuzzy probability" was first introduced in the 1970s but has since come to describe two distinct concepts which have been somewhat confused in the literature of the field. The first of these views fuzziness in probabilities as induced by fuzziness in the definition of events of interest, whereas the second uses fuzziness in probabilities as a way of modelling vagueness in subjective linguistic probability assignments. The difference between the two concepts of "fuzzy probability" examined is marked by relabelling the second "linguistic probability". An earlier attempt to provide a theory of such linguistic probabilities is then examined and found to place unreasonable restrictions on the choice of fuzzy probabilities. On the basis of this critique an improved theory is developed. Computational issues are then considered as a prelude to an example application: a Bayesian network with linguistically specified prior and conditional probabilities
1.11
0.1
0.036667
0.004
0.002
0.000778
0.000089
0
0
0
0
0
0
0
An Approach To Interval-Valued R-Implications And Automorphisms The aim of this work is to introduce an approach for interval-valued R-implications, which satisfy some analogous properties of R-implications. We show that the best interval representation of an R-implication that is obtained from a left continuous t-norm coincides with the interval-valued R-implication obtained from the best interval representation of such t-norm, whenever this is an inclusion monotonic interval function. This provides, under this condition, a nice characterization for the best interval representation of an R-implication, which is also an interval-valued R-implication. We also introduce interval-valued automorphisms as the best interval representations of automorphisms. It is shown that interval automorphisms act on interval R-implications, generating other interval R-implications.
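As a hedged illustration of interval-valued R-implications, the following Python sketch uses the Łukasiewicz residuum, whose closed form is min(1, 1 - x + y), together with its endpoint-wise interval extension; the endpoint formula relies on the implication being decreasing in its first argument and increasing in its second, and the numeric inputs are illustrative.

```python
# The Lukasiewicz R-implication and its endpoint-wise interval extension.

def luk_implication(x, y):
    """Residuum of the Lukasiewicz t-norm T(x, z) = max(0, x + z - 1)."""
    return min(1.0, 1.0 - x + y)

def interval_luk_implication(x_int, y_int):
    """Apply the implication to intervals [x1, x2] and [y1, y2]; the lower bound
    uses the 'worst' pair (x2, y1) and the upper bound the 'best' pair (x1, y2)."""
    (x1, x2), (y1, y2) = x_int, y_int
    return (luk_implication(x2, y1), luk_implication(x1, y2))

print(luk_implication(0.8, 0.5))                          # about 0.7
print(interval_luk_implication((0.6, 0.8), (0.4, 0.5)))   # about (0.6, 0.9)
```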
Xor-Implications and E-Implications: Classes of Fuzzy Implications Based on Fuzzy Xor The main contribution of this paper is to introduce an autonomous definition of the connective ''fuzzy exclusive or'' (fuzzy Xor, for short), which is independent of other connectives. Also, two canonical definitions of the connective Xor are obtained from the composition of fuzzy connectives, based on the commutative and associative properties related to the notions of triangular norms, triangular conorms and fuzzy negations. We show that the main properties of the classical connective Xor are preserved by the connective fuzzy Xor, and, therefore, this new definition of the connective fuzzy Xor extends the related classical approach. The definitions of fuzzy Xor-implications and fuzzy E-implications, induced by the fuzzy Xor connective, are also studied, and their main properties are analyzed. The relationships between the fuzzy Xor-implications and the fuzzy E-implications with automorphisms are explored.
Interval additive generators of interval t-norms and interval t-conorms. The aim of this paper is to introduce the concepts of interval additive generators of interval t-norms and interval t-conorms, as interval representations of additive generators of t-norms and t-conorms, respectively, considering both the correctness and the optimality criteria. The formalization of interval fuzzy connectives in terms of their interval additive generators provides a more systematic methodology for the selection of interval t-norms and interval t-conorms in the various applications of fuzzy systems. We also prove that interval additive generators satisfy the main properties of additive generators discussed in the literature.
Interval-valued Fuzzy Sets, Possibility Theory and Imprecise Probability Interval-valued fuzzy sets were proposed thirty years ago as a natural extension of fuzzy sets. Many variants of these mathematical objects exist, under various names. One popular variant proposed by Atanassov starts by the specification of membership and non-membership functions. This paper focuses on interpretations of such extensions of fuzzy sets, whereby the two membership functions that define them can be justified in the scope of some information representation paradigm. It particularly focuses on a recent proposal by Neumaier, who proposes to use interval-valued fuzzy sets under the name "clouds", as an efficient method to represent a family of probabilities. We show the connection between clouds, interval-valued fuzzy sets and possibility theory.
Level sets and the extension principle for interval valued fuzzy sets and its application to uncertainty measures We describe the representation of a fuzzy subset in terms of its crisp level sets. We then generalize these level sets to the case of interval valued fuzzy sets and provide for a representation of an interval valued fuzzy set in terms of crisp level sets. We note that in this representation, while the level sets are crisp, the memberships are still intervals. Once having this representation we turn to its role in the extension principle and particularly to the extension of measures of uncertainty of interval valued fuzzy sets. Two types of extension of uncertainty measures are investigated. The first, based on the level set representation, leads to extensions whose values for the measure of uncertainty are themselves fuzzy sets. The second, based on the use of integrals, results in extensions whose value for the uncertainty of an interval valued fuzzy set is an interval.
Multiattribute decision making based on interval-valued intuitionistic fuzzy values In this paper, we present a new multiattribute decision making method based on the proposed interval-valued intuitionistic fuzzy weighted average operator and the proposed fuzzy ranking method for intuitionistic fuzzy values. First, we briefly review the concepts of interval-valued intuitionistic fuzzy sets and the Karnik-Mendel algorithms. Then, we propose the intuitionistic fuzzy weighted average operator and interval-valued intuitionistic fuzzy weighted average operator, based on the traditional weighted average method and the Karnik-Mendel algorithms. Then, we propose a fuzzy ranking method for intuitionistic fuzzy values based on likelihood-based comparison relations between intervals. Finally, we present a new multiattribute decision making method based on the proposed interval-valued intuitionistic fuzzy weighted average operator and the proposed fuzzy ranking method for intuitionistic fuzzy values. The proposed method provides us with a useful way for multiattribute decision making based on interval-valued intuitionistic fuzzy values.
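As a hedged illustration of the data type involved, the following Python sketch applies one widely used interval-valued intuitionistic fuzzy weighted averaging operator (the algebraic-product form); it is not claimed to be the operator proposed in the paper, and the ratings and weights are illustrative.

```python
# One common interval-valued intuitionistic fuzzy weighted average: each value
# is (membership interval, non-membership interval); weights must sum to one.
import math

def ivif_weighted_average(values, weights):
    mu_lo = 1 - math.prod((1 - a) ** w for ((a, b), (c, d)), w in zip(values, weights))
    mu_up = 1 - math.prod((1 - b) ** w for ((a, b), (c, d)), w in zip(values, weights))
    nu_lo = math.prod(c ** w for ((a, b), (c, d)), w in zip(values, weights))
    nu_up = math.prod(d ** w for ((a, b), (c, d)), w in zip(values, weights))
    return (mu_lo, mu_up), (nu_lo, nu_up)

# Two attribute ratings of one alternative, weighted 0.6 and 0.4.
v1 = ((0.4, 0.5), (0.2, 0.3))   # membership in [0.4, 0.5], non-membership in [0.2, 0.3]
v2 = ((0.6, 0.7), (0.1, 0.2))
print(ivif_weighted_average([v1, v2], [0.6, 0.4]))
```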
The pseudo-linear semantics of interval-valued fuzzy logics Triangle algebras are equationally defined structures that are equivalent with certain residuated lattices on a set of intervals, which are called interval-valued residuated lattices (IVRLs). Triangle algebras have been used to construct triangle logic (TL), a formal fuzzy logic that is sound and complete w.r.t. the class of IVRLs. In this paper, we prove that the so-called pseudo-prelinear triangle algebras are subdirect products of pseudo-linear triangle algebras. This can be compared with MTL-algebras (prelinear residuated lattices) being subdirect products of linear residuated lattices. As a consequence, we are able to prove the pseudo-chain completeness of pseudo-linear triangle logic (PTL), an axiomatic extension of TL introduced in this paper. This kind of completeness is the analogue of the chain completeness of monoidal T-norm based logic (MTL). This result also provides a better insight in the structure of triangle algebras; it enables us, amongst others, to prove properties of pseudo-prelinear triangle algebras more easily. It is known that there is a one-to-one correspondence between triangle algebras and couples (L,@a), in which L is a residuated lattice and @a an element in that residuated lattice. We give a schematic overview of some properties of pseudo-prelinear triangle algebras (and a number of others that can be imposed on a triangle algebra), and the according necessary and sufficient conditions on L and @a.
Subsethood, entropy, and cardinality for interval-valued fuzzy sets---An algebraic derivation In this paper a unified formulation of subsethood, entropy, and cardinality for interval-valued fuzzy sets (IVFSs) is presented. An axiomatic skeleton for subsethood measures in the interval-valued fuzzy setting is proposed, in order for subsethood to reduce to an entropy measure. By exploiting the equivalence between the structures of IVFSs and Atanassov's intuitionistic fuzzy sets (A-IFSs), the notion of average possible cardinality is presented and its connection to least and biggest cardinalities, proposed in [E. Szmidt, J. Kacprzyk, Entropy for intuitionistic fuzzy sets, Fuzzy Sets and Systems 118 (2001) 467-477], is established both algebraically and geometrically. A relation with the cardinality of fuzzy sets (FSs) is also demonstrated. Moreover, the entropy-subsethood and interval-valued fuzzy entropy theorems are stated and algebraically proved, which generalize the work of Kosko [Fuzzy entropy and conditioning, Inform. Sci. 40(2) (1986) 165-174; Fuzziness vs. probability, International Journal of General Systems 17(2-3) (1990) 211-240; Neural Networks and Fuzzy Systems, Prentice-Hall International, Englewood Cliffs, NJ, 1992; Intuitionistic Fuzzy Sets: Theory and Applications, Vol. 35 of Studies in Fuzziness and Soft Computing, Physica-Verlag, Heidelberg, 1999] for FSs. Finally, connections of the proposed subsethood and entropy measures for IVFSs with corresponding definitions for FSs and A-IFSs are provided.
Regular Expressions for Linear Sequential Circuits
Linguistic modeling by hierarchical systems of linguistic rules In this paper, we propose an approach to design linguistic models which are accurate to a high degree and may be suitably interpreted. This approach is based on the development of a learning methodology for hierarchical systems of linguistic rules. This methodology has been conceived as a refinement of simple linguistic models which, preserving their descriptive power, introduces small changes to increase their accuracy. To do so, we extend the structure of the knowledge base of fuzzy rule base systems in a hierarchical way, in order to make it more flexible. This flexibilization will allow us to have linguistic rules defined over linguistic partitions with different granularity levels, and thus to improve the modeling of those problem subspaces where the former models have poor performance.
Approximate spatial reasoning: integrating qualitative and quantitative constraints Approximate reasoning refers in general to a broad class of solution techniques where either the inference procedure or the environment for inference is imprecise. Algorithms for approximate spatial reasoning are important for coping with the widespread imprecision and uncertainty in the real world. This paper develops an integrated framework for representing induced spatial constraints between a set of landmarks given imprecise, incomplete, and possibly conflicting quantitative and qualitative information about them. Fuzzy logic is used as the computational basis for both representing quantitative information and interpreting linguistically expressed qualitative constraints.
Guidelines for Constructing Reusable Domain Ontologies The growing interest in ontologies is concomitant with the increasing use of agent systems in user environments. Ontologies have established themselves as schemas for encoding knowledge about a particular domain, which can be interpreted by both humans and agents to accomplish a task in cooperation. However, construction of the domain ontologies is a bottleneck, and planning towards reuse of domain ontologies is essential. Current methodologies concerned with ontology development have not dealt with explicit reuse of domain ontologies. This paper presents guidelines for the systematic construction of reusable domain ontologies. A purpose-driven approach has been adopted. The guidelines have been used for constructing ontologies in the Experimental High-Energy Physics domain.
Effective corner-based techniques for variation-aware IC timing verification Traditional integrated circuit timing sign-off consists of verifying a design for a set of carefully chosen combinations of process and operating parameter extremes, referred to as corners. Such corners are usually chosen based on the knowledge of designers and process engineers, and are expected to cover the worst-case fabrication and operating scenarios. With increasingly more detailed attention to variability, the number of potential conditions to examine can be exponentially large, more than is possible to handle with straightforward exhaustive analysis. This paper presents efficient yet exact techniques for computing worst-delay and worst-slack corners of combinational and sequential digital integrated circuits. Results show that the proposed techniques enable efficient and accurate detection of failing conditions while accounting for timing variability due to process variations.
Overview of HEVC High-Level Syntax and Reference Picture Management The increasing proportion of video traffic in telecommunication networks puts an emphasis on efficient video compression technology. High Efficiency Video Coding (HEVC) is the forthcoming video coding standard that provides substantial bit rate reductions compared to its predecessors. In the HEVC standardization process, technologies such as picture partitioning, reference picture management, and parameter sets are categorized as “high-level syntax.” The design of the high-level syntax impacts the interface to systems and error resilience, and provides new functionalities. This paper presents an overview of the HEVC high-level syntax, including network abstraction layer unit headers, parameter sets, picture partitioning schemes, reference picture management, and supplemental enhancement information messages.
1.101218
0.051971
0.033335
0.026051
0.017072
0.002299
0.000502
0.00008
0.00001
0.000001
0
0
0
0
Contrast of a fuzzy relation In this paper we address a key problem in many fields: how a structured data set can be analyzed in order to take into account the neighborhood of each individual datum. We propose representing the dataset as a fuzzy relation, associating a membership degree with each element of the relation. We then introduce the concept of interval-contrast, a means of aggregating information contained in the immediate neighborhood of each element of the fuzzy relation. The interval-contrast measures the range of membership degrees present in each neighborhood. We use interval-contrasts to define the necessary properties of a contrast measure, construct several different local contrast and total contrast measures that satisfy these properties, and compare our expressions to other definitions of contrast appearing in the literature. Our theoretical results can be applied to several different fields. In Appendix A, we apply our contrast expressions to photographic images.
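As a hedged illustration of the neighborhood-based contrast idea, the following Python sketch takes the range of membership degrees over each 3x3 neighborhood of a fuzzy relation as a local contrast and averages it as one possible total contrast; the paper defines several such measures, of which this is only one simple instance, and the relation here is random.

```python
# Local contrast of a fuzzy relation: range (max - min) of the membership
# degrees in each 3x3 neighborhood, averaged into one total-contrast number.
import numpy as np

def local_contrast(R):
    n, m = R.shape
    C = np.zeros_like(R)
    for i in range(n):
        for j in range(m):
            patch = R[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            C[i, j] = patch.max() - patch.min()
    return C

rng = np.random.default_rng(4)
R = rng.random((6, 6))                 # membership degrees of a fuzzy relation
C = local_contrast(R)
print(C.round(2))
print("total contrast:", C.mean())
```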
Unified full implication algorithms of fuzzy reasoning This paper discusses the full implication inference of fuzzy reasoning. For all residuated implications induced by left continuous t-norms, unified α-triple I algorithms are constructed to generalize the known results. As corollaries of the main results of this paper, some special algorithms can be easily derived based on four important residuated implications. These algorithms would be beneficial to applications of fuzzy reasoning. Based on properties of residuated implications, the proofs of many of the conclusions are greatly simplified.
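As a hedged illustration of full implication (triple I) reasoning, the following Python sketch evaluates the standard triple I solution of fuzzy modus ponens, B*(y) = sup_x T(A*(x), R(A(x), B(y))), on discretized universes with the Łukasiewicz residuated pair as one concrete choice; the membership vectors are invented examples.

```python
# Triple I (full implication) fuzzy modus ponens on finite universes,
# using the Lukasiewicz t-norm and its residual implication.
import numpy as np

t_norm   = lambda a, b: np.maximum(0.0, a + b - 1.0)     # Lukasiewicz t-norm
residuum = lambda a, b: np.minimum(1.0, 1.0 - a + b)     # its residual implication

def triple_i(A, B, A_star):
    """A and A_star live on universe X, B on universe Y; returns B* on Y."""
    R = residuum(A[:, None], B[None, :])                 # R[i, j] = A(x_i) -> B(y_j)
    return np.max(t_norm(A_star[:, None], R), axis=0)    # sup over x of T(A*(x), R)

A      = np.array([0.0, 0.3, 0.7, 1.0, 0.6])   # antecedent "x is small"
B      = np.array([1.0, 0.8, 0.4, 0.1])        # consequent "y is fast"
A_star = np.array([0.0, 0.2, 0.6, 0.9, 0.7])   # observed "x is fairly small"
print(triple_i(A, B, A_star))                  # inferred membership function of B*
```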
On interval fuzzy S-implications This paper presents an analysis of interval-valued S-implications and interval-valued automorphisms, showing a way to obtain an interval-valued S-implication from two S-implications, such that the resulting interval-valued S-implication is said to be obtainable. Some consequences of that are: (1) the resulting interval-valued S-implication satisfies the correctness property, and (2) some important properties of usual S-implications are preserved by such interval representations. A relation between S-implications and interval-valued S-implications is outlined, showing that the action of an interval-valued automorphism on an interval-valued S-implication produces another interval-valued S-implication.
Lattices of fuzzy sets and bipolar fuzzy sets, and mathematical morphology Mathematical morphology is based on the algebraic framework of complete lattices and adjunctions, which endows it with strong properties and allows for multiple extensions. In particular, extensions to fuzzy sets of the main morphological operators, such as dilation and erosion, can be done while preserving all properties of these operators. Another extension concerns bipolar fuzzy sets, where both positive information and negative information are handled, along with their imprecision. We detail these extensions from the point of view of the underlying lattice structure. In the case of bipolarity, its two-component nature raises the question of defining a proper partial ordering. In this paper, we consider Pareto (component-wise) and lexicographic orderings.
Is there a need for fuzzy logic? ''Is there a need for fuzzy logic?'' is an issue which is associated with a long history of spirited discussions and debate. There are many misconceptions about fuzzy logic. Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning. More specifically, fuzzy logic may be viewed as an attempt at formalization/mechanization of two remarkable human capabilities. First, the capability to converse, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, conflicting information, partiality of truth and partiality of possibility - in short, in an environment of imperfect information. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations [L.A. Zadeh, From computing with numbers to computing with words - from manipulation of measurements to manipulation of perceptions, IEEE Transactions on Circuits and Systems 45 (1999) 105-119; L.A. Zadeh, A new direction in AI - toward a computational theory of perceptions, AI Magazine 22 (1) (2001) 73-84]. In fact, one of the principal contributions of fuzzy logic - a contribution which is widely unrecognized - is its high power of precisiation. Fuzzy logic is much more than a logical system. It has many facets. The principal facets are: logical, fuzzy-set-theoretic, epistemic and relational. Most of the practical applications of fuzzy logic are associated with its relational facet. In this paper, fuzzy logic is viewed in a nonstandard perspective. In this perspective, the cornerstones of fuzzy logic - and its principal distinguishing features - are: graduation, granulation, precisiation and the concept of a generalized constraint. A concept which has a position of centrality in the nontraditional view of fuzzy logic is that of precisiation. Informally, precisiation is an operation which transforms an object, p, into an object, p^*, which in some specified sense is defined more precisely than p. The object of precisiation and the result of precisiation are referred to as precisiend and precisiand, respectively. In fuzzy logic, a differentiation is made between two meanings of precision - precision of value, v-precision, and precision of meaning, m-precision. Furthermore, in the case of m-precisiation a differentiation is made between mh-precisiation, which is human-oriented (nonmathematical), and mm-precisiation, which is machine-oriented (mathematical). A dictionary definition is a form of mh-precisiation, with the definiens and definiendum playing the roles of precisiend and precisiand, respectively. Cointension is a qualitative measure of the proximity of meanings of the precisiend and precisiand. A precisiand is cointensive if its meaning is close to the meaning of the precisiend. A concept which plays a key role in the nontraditional view of fuzzy logic is that of a generalized constraint. If X is a variable then a generalized constraint on X, GC(X), is expressed as X isr R, where R is the constraining relation and r is an indexical variable which defines the modality of the constraint, that is, its semantics. The primary constraints are: possibilistic, (r=blank), probabilistic (r=p) and veristic (r=v). The standard constraints are: bivalent possibilistic, probabilistic and bivalent veristic. In large measure, science is based on standard constraints. Generalized constraints may be combined, qualified, projected, propagated and counterpropagated. 
The set of all generalized constraints, together with the rules which govern generation of generalized constraints, is referred to as the generalized constraint language, GCL. The standard constraint language, SCL, is a subset of GCL. In fuzzy logic, propositions, predicates and other semantic entities are precisiated through translation into GCL. Equivalently, a semantic entity, p, may be precisiated by representing its meaning as a generalized constraint. By construction, fuzzy logic has a much higher level of generality than bivalent logic. It is the generality of fuzzy logic that underlies much of what fuzzy logic has to offer. Among the important contributions of fuzzy logic are the following: 1.FL-generalization. Any bivalent-logic-based theory, T, may be FL-generalized, and hence upgraded, through addition to T of concepts and techniques drawn from fuzzy logic. Examples: fuzzy control, fuzzy linear programming, fuzzy probability theory and fuzzy topology. 2.Linguistic variables and fuzzy if-then rules. The formalism of linguistic variables and fuzzy if-then rules is, in effect, a powerful modeling language which is widely used in applications of fuzzy logic. Basically, the formalism serves as a means of summarization and information compression through the use of granulation. 3.Cointensive precisiation. Fuzzy logic has a high power of cointensive precisiation. This power is needed for a formulation of cointensive definitions of scientific concepts and cointensive formalization of human-centric fields such as economics, linguistics, law, conflict resolution, psychology and medicine. 4.NL-Computation (computing with words). Fuzzy logic serves as a basis for NL-Computation, that is, computation with information described in natural language. NL-Computation is of direct relevance to mechanization of natural language understanding and computation with imprecise probabilities. More generally, NL-Computation is needed for dealing with second-order uncertainty, that is, uncertainty about uncertainty, or uncertainty^2 for short. In summary, progression from bivalent logic to fuzzy logic is a significant positive step in the evolution of science. In large measure, the real-world is a fuzzy world. To deal with fuzzy reality what is needed is fuzzy logic. In coming years, fuzzy logic is likely to grow in visibility, importance and acceptance.
Interval valued QL-implications The aim of this work is to analyze the interval canonical representation for fuzzy QL-implications and automorphisms. Intervals have been used to model the uncertainty of a specialist's information related to truth values in the fuzzy propositional calculus: the basic systems are based on interval fuzzy connectives. Thus, using subsets of the real unit interval as the standard sets of truth degrees and applying continuous t-norms, t-conorms and negation as standard truth interval functions, the standard truth interval function of a QL-implication can be obtained. Interesting results on the analysis of the interval canonical representation for fuzzy QL-implications and automorphisms are presented. In addition, commutative diagrams are used in order to understand how an interval automorphism acts on interval QL-implications, generating other interval fuzzy QL-implications.
Advances in type-2 fuzzy sets and systems In this state-of-the-art paper, important advances that have been made during the past five years for both general and interval type-2 fuzzy sets and systems are described. Interest in type-2 subjects is worldwide and touches on a broad range of applications and many interesting theoretical topics. The main focus of this paper is on the theoretical topics, with descriptions of what they are, what has been accomplished, and what remains to be done.
Fuzzy Algorithms
Comment on: "Image thresholding using type II fuzzy sets". Importance of this method In this work we develop some reflections on the thresholding algorithm proposed by Tizhoosh in [16]. The purpose of these reflections is to complete the considerations published recently in [17,18] on said algorithm. We also prove that under certain constructions, Tizhoosh's algorithm makes it possible to obtain additional information from commonly used fuzzy algorithms.
Fuzzy subsethood for fuzzy sets of type-2 and generalized type-n In this paper, we use Zadeh's extension principle to extend Kosko's definition of the fuzzy subsethood measure S(G, H) to type-2 fuzzy sets defined on any set X equipped with a measure. Subsethood is itself a fuzzy set that is a crisp interval when G and H are interval type-2 sets. We show how to compute this interval and then use the result to compute subsethood for general type-2 fuzzy sets. A definition of subsethood for arbitrary fuzzy sets of type-n > 2 is then developed. This subsethood is a type-(n - 1) fuzzy set, and we provide a procedure to compute subsethood of interval type-3 fuzzy sets.
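For orientation, Kosko's type-1 subsethood measure, which the abstract above extends to type-2 and type-n sets via the extension principle, is commonly written in the following textbook form (this is the standard formula, not one quoted from the paper):

```latex
\[
S(G,H) \;=\; \frac{\sum_{x \in X} \min\bigl(\mu_G(x),\,\mu_H(x)\bigr)}{\sum_{x \in X} \mu_G(x)},
\qquad 0 \le S(G,H) \le 1,
\]
```

i.e. the degree to which G is contained in H over a finite universe X; for interval type-2 sets this single number becomes a crisp interval, as the abstract states.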
MIMO technologies in 3GPP LTE and LTE-advanced The 3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. The majority of the world's operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE, such as spatial multiplexing, transmit diversity, and beamforming, are key components for providing a higher peak rate at better system efficiency, both of which are essential for supporting future broadband data services over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item "LTE-Advanced" to meet the IMT-Advanced requirements set by the International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview of the MIMO technologies currently discussed in the LTE-Advanced forum.
Tensor-Train Decomposition A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
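The abstract describes the tensor-train (TT) format only at a high level. A minimal sketch of the underlying idea, assuming the usual convention that each entry of a d-dimensional tensor is a product of small core slices, A(i1,...,id) = G1[i1] G2[i2] ... Gd[id]; the variable names and shapes below are illustrative and not taken from any particular TT library:

```python
import numpy as np

# Hypothetical 3-D tensor in TT format: core k has shape (r_{k-1}, n_k, r_k),
# with boundary ranks r_0 = r_3 = 1.
n, r = 4, 2
rng = np.random.default_rng(0)
cores = [rng.standard_normal((1, n, r)),
         rng.standard_normal((r, n, r)),
         rng.standard_normal((r, n, 1))]

def tt_entry(cores, idx):
    """Evaluate one entry A[i1, i2, i3] as a product of core slices."""
    m = np.eye(1)
    for G, i in zip(cores, idx):
        m = m @ G[:, i, :]          # (1 x r_{k-1}) @ (r_{k-1} x r_k)
    return float(m[0, 0])

def tt_full(cores):
    """Contract all cores into the full tensor (feasible only for tiny examples)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=1)   # contract adjacent rank index
    return full.reshape([c.shape[1] for c in cores])

A = tt_full(cores)
i = (1, 2, 3)
assert np.isclose(A[i], tt_entry(cores, i))
```

The point of the format is that the cores require O(d n r^2) storage instead of n^d for the full tensor, while all basic operations can work directly on the cores.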
Performance analysis of partial segmented compressed sampling Recently, a segmented AIC (S-AIC) structure that measures the analog signal by K parallel branches of mixers and integrators (BMIs) was proposed by Taheri and Vorobyov (2011). Each branch is characterized by a random sampling waveform and implements integration in several continuous and non-overlapping time segments. By permuting the subsamples collected by each segment at different BMIs, more than K samples can be generated. To reduce the complexity of the S-AIC, in this paper we propose a partial segmented AIC (PS-AIC) structure, where K branches are divided into J groups and each group, acting as an independent S-AIC, only works within a partial period that is non-overlapping in time. Our structure is inspired by the recent validation that block diagonal matrices satisfy the restricted isometry property (RIP). Using this fact, we prove that the equivalent measurement matrix of the PS-AIC satisfies the RIP when the number of samples exceeds a certain threshold. Furthermore, the recovery performance of the proposed scheme is developed, where the analytical results show its performance gain when compared with the conventional AIC. Simulations verify the effectiveness of the PS-AIC and the validity of our theoretical results.
Sparsity Regularization for Radon Measures In this paper we establish a regularization method for Radon measures. Motivated by sparse L1 regularization, we introduce a new regularization functional for the Radon norm, whose properties are then analyzed. Furthermore, we show well-posedness of Radon-measure-based sparsity regularization. Finally, we present numerical examples along with the underlying algorithmic and implementation details. We see here that the number of iterations turns out to be of utmost importance when it comes to obtaining reliable reconstructions of sparse data with varying intensities.
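As an illustration of the kind of functional the abstract refers to, a Tikhonov-type formulation with the Radon norm as penalty can be sketched as follows (our generic notation, not necessarily the paper's exact definition):

```latex
\[
\min_{\mu \in \mathcal{M}(\Omega)} \; \tfrac{1}{2}\,\|K\mu - f\|_Y^{2} \;+\; \alpha\,\|\mu\|_{\mathcal{M}(\Omega)},
\]
```

where K is a linear forward operator mapping measures to data in Y, f is the measured data, and the Radon (total-variation) norm of the measure plays the role that the L1 norm plays in classical sparse regularization.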
1.217402
0.043739
0.02433
0.024156
0.00499
0.001044
0.000235
0.000066
0.000014
0.000003
0
0
0
0
Energy saving approaches for video streaming on smartphone based on QoE modeling In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and propose energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.
An example of real time QoE IPTV service estimator This paper considers an estimator that includes mathematical modelling of the physical channel parameters, the channel being the information carrier and the weakest link in the telecommunication chain of information transfer. It also identifies the necessary physical layer parameters that influence the quality of multimedia service delivery, or QoE (Quality of Experience). By modelling the above mentioned parameters, a relation is defined between the degradations that appear in the channel between the user and the central telecommunication equipment, with one dominant medium used for information transfer with a certain error probability. Degradations in a physical channel can be noticed by observing changes in the values of the channel transfer function or the appearance of increased noise. Estimation of QoE for the IPTV (Internet Protocol Television) service is especially necessary during delivery of real time services, since in that case the mentioned degradations may appear at any moment and cause packet loss.
The Impact of Interactivity on the QoE: A Preliminary Analysis The interactivity in multimedia services concerns the input/output process of the user with the system, as well as its cooperativity. It is an important element affecting the overall Quality of Experience (QoE), and it may even mask the impact of the quality level of the (audio and visual) signal itself on the overall user perception. This work is a preliminary study aimed at evaluating the weight of interactivity; it relies on subjective assessments conducted while varying the artefacts, genre and interactivity features of the video streaming services evaluated by the subjects. Subjective evaluations were collected from 25 subjects in compliance with ITU-T Recommendation P.910 through single-stimulus Absolute Category Rating (ACR). The results show that the impact of interactivity is influenced by the presence of other components, such as buffer starvations and the type of content displayed. An objective quality metric able to measure the influence of interactivity on the QoE has also been defined, and it has proved to be highly correlated with the subjective results. We conclude that the interactivity feature can be successfully represented by either an additive or a multiplicative component to be added to existing quality metrics.
QoE Evaluation of Multimedia Services Based on Audiovisual Quality and User Interest. Quality of experience (QoE) has significant influence on whether or not a user will choose a service or product in the competitive era. For multimedia services, there are various factors in a communication ecosystem working together on users, which stimulate their different senses inducing multidimensional perceptions of the services, and inevitably increase the difficulty in measurement and estim...
QoE-oriented 3D video transcoding for mobile streaming With advances in mobile 3D displays, mobile 3D video is already enabled by wireless multimedia networking, and it is expected to become increasingly popular, since it lets people enjoy a natural 3D experience anywhere and anytime. At the current stage, mobile 3D video is generally delivered over a heterogeneous network combining wired and wireless channels. How to guarantee the optimal 3D visual quality of experience (QoE) for mobile 3D video streaming is one of the important topics of concern to service providers. In this article, we propose a QoE-oriented transcoding approach to enhance the quality of mobile 3D video services. By learning the pre-controlled QoE patterns of 3D contents, the proposed 3D visual QoE inference model can be utilized to regulate the transcoding configurations in real time according to feedback on network and user-end device information. In the learning stage, we propose a piecewise linear mean opinion score (MOS) interpolation method to further reduce the cumbersome manual work of preparing QoE patterns. Experimental results show that the proposed transcoding approach can provide an adapted 3D stream to the heterogeneous network, and further provide superior QoE performance compared to fixed quantization parameter (QP) transcoding and mean squared error (MSE) optimized transcoding for mobile 3D video streaming.
QoE-Based Cross-Layer Optimization of Wireless Video with Unperceivable Temporal Video Quality Fluctuation This paper proposes a novel approach for Quality of Experience (QoE) driven cross-layer optimization for wireless video transmission. We formulate the cross-layer optimization problem with a constraint on the temporal fluctuation of the video quality. Our objective is to minimize the temporal change of the video quality as perceivable quality fluctuations negatively affect the overall quality of experience. The proposed QoE scheme jointly optimizes the application layer and the lower layers of a wireless protocol stack. It allocates network resources and performs rate adaptation such that the fluctuations lie within the range of unperceivable changes. We determine corresponding perception thresholds via extensive subjective tests and evaluate the proposed scheme using an OPNET High Speed Downlink Packet Access (HSDPA) emulator. Our simulation results show that the proposed approach leads to a noticeable improvement of overall user satisfaction for the provided video delivery service when compared to state-of-the-art approaches.
A Novel QoE-Based Carrier Scheduling Scheme in LTE-Advanced Networks with Multi-Service Carrier aggregation is one of the key techniques for the advancement of long-term evolution (LTE-Advanced) networks. This article proposes a quality-of-experience (QoE)-based carrier scheduling scheme for networks with multiple services. The proposed scheme aims at maximizing the user QoE, which is determined by both the application-level and network-level quality of service. Packet delay, as an essential factor affecting QoE, is first discussed in the context of QoE optimization, together with the data rate. The component carriers are dynamically scheduled according to the network traffic load by the proposed scheme. Simulation results show that our approach can achieve significant improvement in QoE and fairness over conventional approaches.
A generic quantitative relationship between quality of experience and quality of service Quality of experience ties together user perception, experience, and expectations to application and network performance, typically expressed by quality of service parameters. Quantitative relationships between QoE and QoS are required in order to be able to build effective QoE control mechanisms onto measurable QoS parameters. Against this background, this article proposes a generic formula in which QoE and QoS parameters are connected through an exponential relationship, called IQX hypothesis. The formula relates changes of QoE with respect to QoS to the current level of QoE, is simple to match, and its limit behaviors are straightforward to interpret. It validates the IQX hypothesis for streaming services, where QoE in terms of Mean Opinion Scores is expressed as functions of loss and reordering ratio, the latter of which is caused by jitter. For web surfing as the second application area, matchings provided by the IQX hypothesis are shown to outperform previously published logarithmic functions. We conclude that the IQX hypothesis is a strong candidate to be taken into account when deriving relationships between QoE and QoS parameters.
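The exponential relationship referred to as the IQX hypothesis is usually written in the following form (standard in the QoE literature; the parameter names are generic):

```latex
\[
\mathit{QoE} \;=\; \alpha\, e^{-\beta \cdot \mathit{QoS}} \;+\; \gamma,
\qquad
\frac{\partial\, \mathit{QoE}}{\partial\, \mathit{QoS}} \;=\; -\beta\,(\mathit{QoE} - \gamma),
\]
```

which makes explicit the property stated in the abstract: the change of QoE with respect to the QoS parameter depends on the current QoE level.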
Control plane design in multidomain/multilayer optical networks As optical networks proliferate, there is a growing need to address distributed multi-domain provisioning. Although multi-domain operation has been well-studied in packet/cell-switching networks, the multilayer (granularity) circuit-switched nature of modern optical networks presents a unique set of challenges. This survey addresses control plane design for such heterogeneous infrastructures and describes new challenges in the areas of state dissemination, path computation, and survivability. Sample results from a recent study also are presented.
Galerkin Finite Element Approximations of Stochastic Elliptic Partial Differential Equations We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.
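The model problem underlying this kind of analysis is typically of the following form (a generic statement of the setting, not quoted from the paper):

```latex
\[
-\nabla \cdot \bigl(a(x,\omega)\,\nabla u(x,\omega)\bigr) = f(x) \quad \text{in } D,
\qquad
u(x,\omega) = 0 \quad \text{on } \partial D,
\]
```

with a random coefficient a(x, ω), so that the solution u is itself a random field and the quantities of interest are statistics such as the expected value E[u], approximated either by Monte Carlo sampling or by a deterministic parametric reformulation as described above.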
Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian Constraints Combine In this paper, we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment p (BPDQp), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed subject to a data-fidelity constraint expressed in the ℓp-norm of the residual error for 2 ≤ p ≤ ∞. We show theoretically that (i) the reconstruction error of these new decoders is bounded if the sensing matrix satisfies an extended Restricted Isometry Property involving the ℓp norm, and (ii) for Gaussian random matrices and uniformly quantized measurements, BPDQp performance exceeds that of BPDN by dividing the reconstruction error due to quantization by √(p + 1). This last effect happens with high probability when the number of measurements exceeds a value growing with p, i.e., in an oversampled situation compared to what is commonly required by BPDN = BPDQ2. To demonstrate the theoretical power of BPDQp, we report numerical simulations on signal and image reconstruction problems.
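In the notation of the abstract, the BPDQp decoder can be sketched as the following convex program (a schematic restatement in our notation, with Φ the sensing matrix and y the quantized measurements):

```latex
\[
\Delta_p(y,\epsilon) \;=\; \arg\min_{u \in \mathbb{R}^N} \|u\|_1
\quad \text{subject to} \quad \|y - \Phi u\|_p \le \epsilon,
\qquad 2 \le p \le \infty,
\]
```

which reduces to the usual BPDN program for p = 2 and tightens the fidelity constraint toward a uniform (quantization-shaped) error model as p grows.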
PHDD: an efficient graph representation for floating point circuit verification Data structures such as *BMDs, HDDs, and K*BMDs provide compact representations for functions which map Boolean vectors into integer values, but not floating point values. In this paper, we propose a new data structure, called Multiplicative Power Hybrid Decision Diagrams (*PHDDs), to provide a compact representation for functions that map Boolean vectors into integer or floating point values. The size of the graph to represent the IEEE floating point encoding is linear with the word size. The complexity of floating point multiplication grows linearly with the word size. The complexity of floating point addition grows exponentially with the size of the exponent part, but linearly with the size of the mantissa part. We applied *PHDDs to verify integer multipliers and floating point multipliers before the rounding stage, based on a hierarchical verification approach. For integer multipliers, our results are at least 6 times faster than *BMDs. Previous attempts at verifying floating point multipliers required manual intervention. We verified floating point multipliers before the rounding stage automatically.
An integrated quantitative and qualitative FMCDM model for location choices International logistics is a very popular and important issue in the present international supply chain system. In order to reduce the international supply chain operation cost, it is very important for enterprises to invest in the international logistics centers. Although a number of research approaches for solving decision-making problems have been proposed, most of these approaches focused on developing quantitative models for dealing with objective data or qualitative models for dealing with subjective ratings. Few researchers proposed approaches for dealing with both objective data and subjective ratings. Thus, this paper attempts to fill this gap in current literature by establishing an integrated quantitative and qualitative fuzzy multiple criteria decision-making model for dealing with both objective crisp data and subjective fuzzy ratings. Finally, the utilization of the proposed model is demonstrated with a case study on location choices of international distribution centers.
Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation. This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
1.10204
0.10408
0.10408
0.05204
0.02602
0.013314
0.0022
0.000376
0.000039
0
0
0
0
0
Hybrid Gauss-Trapezoidal Quadrature Rules A new class of quadrature rules for the integration of both regular and singular functions is constructed and analyzed. For each rule the quadrature weights are positive and the class includes rules of arbitrarily high-order convergence. The quadratures result from alterations to the trapezoidal rule, in which a small number of nodes and weights at the ends of the integration interval are replaced. The new nodes and weights are determined so that the asymptotic expansion of the resulting rule, provided by a generalization of the Euler--Maclaurin summation formula, has a prescribed number of vanishing terms. The superior performance of the rules is demonstrated with numerical examples and application to several problems is discussed.
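The construction rests on the Euler-Maclaurin expansion of the trapezoidal rule, whose standard form is reproduced here only for orientation (it is a classical formula, not one taken from the paper):

```latex
\[
\int_a^b f(x)\,dx \;\approx\;
h\Bigl(\tfrac{1}{2}f(a) + f(a+h) + \cdots + f(b-h) + \tfrac{1}{2}f(b)\Bigr)
\;-\; \sum_{k=1}^{m} \frac{B_{2k}\,h^{2k}}{(2k)!}\Bigl(f^{(2k-1)}(b) - f^{(2k-1)}(a)\Bigr),
\]
```

up to a remainder term, where the B_{2k} are Bernoulli numbers. Replacing a few nodes and weights near the interval ends so that the leading correction terms of this expansion vanish is what produces the high-order hybrid rules described above.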
Clenshaw-Curtis and Gauss-Legendre Quadrature for Certain Boundary Element Integrals Following a recent article by Trefethen [SIAM Review, 50 (2008), pp. 67-87], the use of Clenshaw-Curtis quadrature rather than Gauss-Legendre quadrature for nearly singular integrals which arise in the boundary element method has been investigated. When these quadrature rules are used in association with the sinh-transformation, the authors have concluded, after considering asymptotic estimates of the truncation errors for certain proto-type functions arising in this context, that Gauss-Legendre quadrature should continue to be the preferred quadrature rule.
Rough and ready error estimates in Gaussian integration of analytic functions Two expressions are derived for use in estimating the error in the numerical integration of analytic functions in terms of the maximum absolute value of the function in an appropriate region of regularity. These expressions are then specialized to the case of Gaussian integration rules, and the resulting error estimates are compared with those obtained by the use of tables of error coefficients.
Series Methods For Integration
Error Estimation In Clenshaw-Curtis Quadrature Formula
A practical algorithm for computing Cauchy principal value integrals of oscillatory functions. A new automatic quadrature scheme is proposed for evaluating Cauchy principal value integrals of oscillatory functions: ⨍_{-1}^{1} f(x) exp(iωx) (x − τ)^{-1} dx, with −1 < τ < 1 and ω ∈ ℝ. The desired approximation is obtained by expanding the function f in the series of Chebyshev polynomials of the first kind, and then by constructing the indefinite integral for a properly modified integrand, to overcome the singularity. The method is proved to converge uniformly, with respect to both τ and ω, for any function f satisfying max_{−1 ≤ x ≤ 1} |f′(x)| < ∞.
Parameterized model order reduction via a two-directional Arnoldi process This paper presents a multiparameter moment-matching based model order reduction technique for parameterized interconnect networks via a novel two-directional Arnoldi process. It is referred to as a PIMTAP algorithm, which stands for Parameterized Interconnect Macromodeling algorithm via a Two-directional Arnoldi Process. PIMTAP inherits the advantages of previous multiparameter moment-matching algorithms and avoids their shortfalls. It is numerically stable and adaptive, and preserves the passivity of parameterized RLC networks.
General-Purpose Nonlinear Model-Order Reduction Using Piecewise-Polynomial Representations We present algorithms for automated macromodeling of nonlinear mixed-signal system blocks. A key feature of our methods is that they automate the generation of general-purpose macromodels that are suitable for a wide range of time- and frequency-domain analyses important in mixed-signal design flows. In our approach, a nonlinear circuit or system is approximated using piecewise-polynomial (PWP) representations. Each polynomial system is reduced to a smaller one via weakly nonlinear polynomial model-reduction methods. Our approach, dubbed PWP, generalizes recent trajectory-based piecewise-linear approaches and ties them with polynomial-based model-order reduction, which inherently captures stronger nonlinearities within each region. PWP-generated macromodels not only reproduce small-signal distortion and intermodulation properties well but also retain fidelity in large-signal transient analyses. The reduced models can be used as drop-in replacements for large subsystems to achieve fast system-level simulation using a variety of time- and frequency-domain analyses (such as dc, ac, transient, harmonic balance, etc.). For the polynomial reduction step within PWP, we also present a novel technique [dubbed multiple pseudoinput (MPI)] that combines concepts from proper orthogonal decomposition with Krylov-subspace projection. We illustrate the use of PWP and MPI with several examples (including op-amps and I/O buffers) and provide important implementation details. Our experiments indicate that it is easy to obtain speedups of about an order of magnitude with push-button nonlinear macromodel-generation algorithms.
Probabilistic models for stochastic elliptic partial differential equations Mathematical requirements that the random coefficients of stochastic elliptical partial differential equations must satisfy such that they have unique solutions have been studied extensively. Yet, additional constraints that these coefficients must satisfy to provide realistic representations for physical quantities, referred to as physical requirements, have not been examined systematically. It is shown that current models for random coefficients constructed solely by mathematical considerations can violate physical constraints and, consequently, be of limited practical use. We develop alternative models for the random coefficients of stochastic differential equations that satisfy both mathematical and physical constraints. Theoretical arguments are presented to show potential limitations of current models and establish properties of the models developed in this study. Numerical examples are used to illustrate the construction of the proposed models, assess the performance of these models, and demonstrate the sensitivity of the solutions of stochastic differential equations to probabilistic characteristics of their random coefficients.
A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations I: Derivation and algorithms We propose a dynamically bi-orthogonal method (DyBO) to solve time dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well-known that the Karhunen-Loeve expansion (KLE) minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion could be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. The main contribution of this paper is that we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on-the-fly without the need to form the covariance matrix or to compute its eigendecomposition. In the first part of our paper, we introduce the derivation of the dynamically bi-orthogonal formulation for SPDEs, discuss several theoretical issues, such as the dynamic bi-orthogonality preservation and some preliminary error analysis of the DyBO method. We also give some numerical implementation details of the DyBO methods, including the representation of stochastic basis and techniques to deal with eigenvalue crossing. In the second part of our paper [11], we will present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. An extensive range of numerical experiments will be provided in both parts to demonstrate the effectiveness of the DyBO method.
Stable recovery of sparse overcomplete representations in the presence of noise Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
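Schematically, and in our notation rather than the paper's, the setting is recovery of a sparse coefficient vector from noisy data over an overcomplete dictionary D:

```latex
\[
y = D x_0 + z, \quad \|z\|_2 \le \epsilon,
\qquad
\hat{x} = \arg\min_{x} \|x\|_0 \ \ \text{s.t.}\ \ \|y - D x\|_2 \le \epsilon,
\qquad
\|\hat{x} - x_0\|_2 \le C\,\epsilon,
\]
```

where the stability bound holds when x_0 is sufficiently sparse relative to the coherence of D; the abstract's point is that comparable stability is also achieved by the computationally tractable basis pursuit and matching pursuit decoders.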
Fault-tolerance in the Borealis distributed stream processing system We present a replication-based approach to fault-tolerant distributed stream processing in the face of node failures, network failures, and network partitions. Our approach aims to reduce the degree of inconsistency in the system while guaranteeing that available inputs capable of being processed are processed within a specified time threshold. This threshold allows a user to trade availability for consistency: a larger time threshold decreases availability but limits inconsistency, while a smaller threshold increases availability but produces more inconsistent results based on partial data. In addition, when failures heal, our scheme corrects previously produced results, ensuring eventual consistency.Our scheme uses a data-serializing operator to ensure that all replicas process data in the same order, and thus remain consistent in the absence of failures. To regain consistency after a failure heals, we experimentally compare approaches based on checkpoint/redo and undo/redo techniques and illustrate the performance trade-offs between these schemes.
Generalized rough sets based on relations Rough set theory has been proposed by Pawlak as a tool for dealing with vagueness and granularity in information systems. The core concepts of classical rough sets are the lower and upper approximations based on equivalence relations. This paper studies generalized rough sets based on arbitrary binary relations. In this setting, a binary relation can generate a lower approximation operation and an upper approximation operation, but some of the common properties of the classical lower and upper approximation operations are no longer satisfied. We investigate conditions on a relation under which these properties hold for the relation-based lower and upper approximation operations.
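For a binary relation R on a universe U, the relation-based approximations discussed in the abstract are usually defined through the successor neighbourhood R_s(x) = { y ∈ U : x R y } (these are the standard definitions, not quoted from the paper):

```latex
\[
\underline{R}(X) = \{\, x \in U : R_s(x) \subseteq X \,\},
\qquad
\overline{R}(X) = \{\, x \in U : R_s(x) \cap X \neq \emptyset \,\},
\]
```

which reduce to Pawlak's classical lower and upper approximations when R is an equivalence relation; properties such as duality survive in general, while others (e.g. R̲(X) ⊆ X) require extra conditions such as reflexivity.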
Granular Association Rules for Multiple Taxonomies: A Mass Assignment Approach The use of hierarchical taxonomies to organise information (or sets of objects) is a common approach for the semantic web and elsewhere, and is based on progressively finer granulations of objects. In many cases, seemingly crisp granulation disguises the fact that categories are based on loosely defined concepts that are better modelled by allowing graded membership. A related problem arises when different taxonomies are used, with different structures, as the integration process may also lead to fuzzy categories. Care is needed when information systems use fuzzy sets to model graded membership in categories - the fuzzy sets are not disjunctive possibility distributions, but must be interpreted conjunctively. We clarify this distinction and show how an extended mass assignment framework can be used to extract relations between fuzzy categories. These relations are association rules and are useful when integrating multiple information sources categorised according to different hierarchies. Our association rules do not suffer from problems associated with use of fuzzy cardinalities. Experimental results on discovering association rules in film databases and terrorism incident databases are demonstrated.
1.078396
0.052824
0.037655
0.016986
0.012561
0.000067
0.000034
0.000009
0.000004
0.000001
0
0
0
0
Statistical timing analysis for intra-die process variations with spatial correlations Process variations have become a critical issue in performance verification of high-performance designs. We present a new, statistical timing analysis method that accounts for inter- and intra-die process variations and their spatial correlations. Since statistical timing analysis has an exponential run time complexity, we propose a method whereby a statistical bound on the probability distribution function of the exact circuit delay is computed with linear run time. First, we develop a model for representing inter- and intra-die variations and their spatial correlations. Using this model, we then show how gate delays and arrival times can be represented as a sum of components, such that the correlation information between arrival times and gate delays is preserved. We then show how arrival times are propagated and merged in the circuit to obtain an arrival time distribution that is an upper bound on the distribution of the exact circuit delay. We prove the correctness of the bound and also show how the bound can be improved by propagating multiple arrival times. The proposed algorithms were implemented and tested on a set of benchmark circuits under several process variation scenarios. The results were compared with Monte Carlo simulation and show an accuracy of 3.32% on average over all test cases.
Speeding up Monte-Carlo Simulation for Statistical Timing Analysis of Digital Integrated Circuits This paper presents a pair of novel techniques to speed up path-based Monte-Carlo simulation for statistical timing analysis of digital integrated circuits with no loss of accuracy. The presented techniques can be used in isolation or together. Both techniques can be readily implemented in any statistical timing framework. We compare our proposed Monte-Carlo simulation with traditional Monte-Carlo simulation in a rigorous framework and show that the new method is up to 2 times as efficient as the traditional method.
Stochastic physical synthesis for FPGAs with pre-routing interconnect uncertainty and process variation Process variation and pre-routing interconnect delay uncertainty affect timing and power for modern VLSI designs in nanometer technologies. This paper presents the first in-depth study on stochastic physical synthesis algorithms leveraging statistical static timing analysis (SSTA) with process variation and pre-routing interconnect delay uncertainty for FPGAs. Evaluated by SSTA with the placed and routed layout and measured at the same clock frequency, the stochastic clustering, placement and routing reduce the yield loss from 50 failed parts per 10 thousand parts (pp10K) for the deterministic flow to 9, 12 and 35pp10K respectively for MCNC designs. The majority of improvements are achieved during clustering and placement while routing stage has much less gain. The gain mainly comes from modeling interconnect delay uncertainty for clustering and from considering process variation for placement. When applying all stochastic algorithms concurrently, the yield loss is reduced to 5pp10K (a 10 X reduction) with the mean delay reduced by 6.2% and the standard deviation reduced by 7.5%. On the other hand, stochastic clustering with deterministic placement and routing is a good flow with little change to the entire flow, but the yield loss is reduced from 50pp10K to 9pp10K, the mean delay is reduced by 5.0%, the standard deviation is reduced by 6.4%, and the runtime is slightly reduced compared to the deterministic flow. Finally, while its improvement over timing is small, stochastic routing is able to reduce the total wire length for the same routing channel width by 4.5% and to reduce runtime by 4.2% compared to deterministic routing.
Computation and Refinement of Statistical Bounds on Circuit Delay The growing impact of within-die process variation has created the need for statistical timing analysis, where gate delays are modeled as random variables. Statistical timing analysis has traditionally suffered from exponential run time complexity with circuit size, due to arrival time dependencies created by reconverging paths in the circuit. In this paper, we propose a new approach to statistical timing analysis that is based on statistical bounds of the circuit delay. Since these bounds have linear run time complexity with circuit size, they can be computed efficiently for large circuits. Since both a lower and an upper bound on the true statistical delay are available, the quality of the bounds can be determined. If the computed bounds are not sufficiently close to each other, we propose a heuristic to iteratively improve the bounds using selective enumeration of the sample space with additional run time. We demonstrate that the proposed bounds have only a small error and that by carefully selecting a small set of nodes for enumeration, this error can be further reduced.
Process variation aware performance analysis of asynchronous circuits considering spatial correlation Current technology trends have led to a growing impact of process variations on the performance of asynchronous circuits. Just as it is imperative to model process parameter variations for sub-100nm technologies to produce a more realistic performance metric, it is equally important to consider the correlation of these variations to increase the accuracy of the performance computation. In this paper, we present an efficient method for performance evaluation of asynchronous circuits considering inter- and intra-die process variation. The proposed method includes both statistical static timing analysis (SSTA) and statistical Timed Petri-Net based simulation. Template-based asynchronous circuits have been modeled using a Variant-Timed Petri-Net. Based on this model, the proposed SSTA calculates the probability density function of the delay of the global critical cycle. The efficiency of the proposed SSTA is obtained from a technique derived from the principal component analysis (PCA) method. This technique simplifies the computation of the mean, variance and covariance values of a set of correlated random variables. In order to consider spatial correlation in the Petri-Net based simulation, we also add a correlation coefficient to the proposed Variant-Timed Petri-Net, which is obtained by partitioning the circuit. We also present a simulation tool for the Variant-Timed Petri-Net, and the results of the experiments are compared with a Monte-Carlo simulation-based method.
Statistical Timing for Parametric Yield Prediction of Digital Integrated Circuits Uncertainty in circuit performance due to manufacturing and environmental variations is increasing with each new generation of technology. It is therefore important to predict the performance of a chip as a probabilistic quantity. This paper proposes three novel path-based algorithms for statistical timing analysis and parametric yield prediction of digital integrated circuits. The methods have been implemented in the context of the EinsTimer static timing analyzer. The three methods are complementary in that they are designed to target different process variation conditions that occur in practice. Numerical results are presented to study the strengths and weaknesses of these complementary approaches. Timing analysis results in the face of statistical temperature and Vdd variations are presented on an industrial ASIC part on which a bounded timing methodology leads to surprisingly wrong results
Correlation-preserved non-Gaussian statistical timing analysis with quadratic timing model Recent studies show that the existing first-order canonical timing model is not sufficient to represent the dependency of the gate delay on the variation sources when process and operational variations become more and more significant. Due to the nonlinearity of the mapping from variation sources to the gate/wire delay, the distribution of the delay is no longer Gaussian even if the variation sources are normally distributed. A novel quadratic timing model is proposed to capture the nonlinearity of the dependency of gate/wire delays and arrival times on the variation sources. A systematic methodology is also developed to evaluate the correlation and distribution of the quadratic timing model. Based on these, a novel statistical timing analysis algorithm is proposed which retains the complete correlation information during timing analysis and has the same computational complexity as the algorithm based on the canonical timing model. Tested on the ISCAS circuits, the proposed algorithm shows a 10× accuracy improvement over the existing first-order algorithm while no significant extra runtime is needed.
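For reference, the first-order canonical delay model that the abstract argues is insufficient, and the quadratic extension it proposes, are commonly written as follows (generic notation; the paper's exact parameterization may differ, and some canonical forms also carry an independent random term):

```latex
\[
d_{\text{linear}} = d_0 + \sum_{i=1}^{n} a_i\,\Delta X_i,
\qquad
d_{\text{quad}} = d_0 + \mathbf{a}^{\mathsf T}\Delta\mathbf{X} + \Delta\mathbf{X}^{\mathsf T}\mathbf{B}\,\Delta\mathbf{X},
\]
```

where the ΔX_i are the (possibly correlated) variation sources and the quadratic term B is what makes the resulting delay distribution non-Gaussian even for Gaussian sources.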
Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters (Liou, et al., 2001), (Orshansky, et al., 2002), (Devgan, et al., 2003), (Agarwal, et al., 2003) or restrict analysis to linear Gaussian parameters only (Visweswariah, et al., 2004), (Chang, et al., 2003). Here the authors extended the capabilities of parameterized block-based statistical STA (Visweswariah, et al., 2004) to handle nonlinear function of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. The novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation, computation of criticality probabilities and sensitivities to process parameter variations. The authors' technique was implemented in an industrial statistical timing analysis tool. The experiments with large digital blocks showed both efficiency and accuracy of the proposed technique.
Statistical blockade: a novel method for very fast Monte Carlo simulation of rare circuit events, and its application Circuit reliability under statistical process variation is an area of growing concern. For highly replicated circuits such as SRAMs and flip flops, a rare statistical event for one circuit may induce a not-so-rare system failure. Existing techniques perform poorly when tasked to generate both efficient sampling and sound statistics for these rare events. Statistical Blockade is a novel Monte Carlo technique that allows us to efficiently filter---to block---unwanted samples insufficiently rare in the tail distributions we seek. The method synthesizes ideas from data mining and Extreme Value Theory, and shows speed-ups of 10X-100X over standard Monte Carlo.
A sparse grid based spectral stochastic collocation method for variations-aware capacitance extraction of interconnects under nanometer process technology In this paper, a Spectral Stochastic Collocation Method (SSCM) is proposed for the capacitance extraction of interconnects with stochastic geometric variations for nanometer process technology. The proposed SSCM has several advantages over the existing methods. Firstly, compared with the PFA (Principal Factor Analysis) modeling of geometric variations, the K-L (Karhunen-Loeve) expansion involved in SSCM can be independent of the discretization of conductors, thus significantly reduces the computation cost. Secondly, compared with the perturbation method, the stochastic spectral method based on Homogeneous Chaos expansion has optimal (exponential) convergence rate, which makes SSCM applicable to most geometric variation cases. Furthermore, Sparse Grid combined with a MST (Minimum Spanning Tree) representation is proposed to reduce the number of sampling points and the computation time for capacitance extraction at each sampling point. Numerical experiments have demonstrated that SSCM can achieve higher accuracy and faster convergence rate compared with the perturbation method.
A domain adaptive stochastic collocation approach for analysis of MEMS under uncertainties This work proposes a domain adaptive stochastic collocation approach for uncertainty quantification, suitable for effective handling of discontinuities or sharp variations in the random domain. The basic idea of the proposed methodology is to adaptively decompose the random domain into subdomains. Within each subdomain, a sparse grid interpolant is constructed using the classical Smolyak construction [S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Soviet Math. Dokl. 4 (1963) 240-243], to approximate the stochastic solution locally. The adaptive strategy is governed by the hierarchical surpluses, which are computed as part of the interpolation procedure. These hierarchical surpluses then serve as an error indicator for each subdomain, and lead to subdivision whenever it becomes greater than a threshold value. The hierarchical surpluses also provide information about the more important dimensions, and accordingly the random elements can be split along those dimensions. The proposed adaptive approach is employed to quantify the effect of uncertainty in input parameters on the performance of micro-electromechanical systems (MEMS). Specifically, we study the effect of uncertain material properties and geometrical parameters on the pull-in behavior and actuation properties of a MEMS switch. Using the adaptive approach, we resolve the pull-in instability in MEMS switches. The results from the proposed approach are verified using Monte Carlo simulations and it is demonstrated that it computes the required statistics effectively.
Neural networks that learn from fuzzy if-then rules An architecture for neural networks that can handle fuzzy input vectors is proposed, and learning algorithms that utilize fuzzy if-then rules as well as numerical data in neural network learning for classification problems and for fuzzy control problems are derived. The learning algorithms can be viewed as an extension of the backpropagation algorithm to the case of fuzzy input vectors and fuzzy target outputs. Using the proposed methods, linguistic knowledge from human experts represented by fuzzy if-then rules and numerical data from measuring instruments can be integrated into a single information processing system (classification system or fuzzy control system). It is shown that the scheme works well for simple examples
Attention-Based Health Monitoring. The application of mobile technologies for health monitoring has garnered great attention in recent years. The sensors together with a mobile device form a personal area network that monitors the patient's health status. It gives advice to the patient, adjusts the environmental conditions according to the patient's needs, and, in the case of an emergency, notifies the patient's doctor or the corresponding medical center. In the current work the authors present a new attention-based architecture for health monitoring, emphasizing the identification of attention-seeking and dangerous health states. The experimental results indicate that the proposed architecture responds very quickly to changes in the patient's biosignals and makes accurate decisions concerning the patient's health status.
A game-theoretic multipath routing for video-streaming services over Mobile Ad Hoc Networks The number of portable devices capable of maintaining wireless communications has increased considerably in the last decade. Such mobile nodes may form a spontaneous self-configured network connected by wireless links to constitute a Mobile Ad Hoc Network (MANET). As the number of mobile end users grows, the demand for multimedia services, such as video streaming, in such networks is envisioned to increase as well. One of the most appropriate video coding techniques for MANETs is layered MPEG-2 VBR, which, used with a proper multipath routing scheme, improves the distribution of video streams. In this article we introduce a proposal called g-MMDSR (game theoretic-Multipath Multimedia Dynamic Source Routing), a cross-layer multipath routing protocol which includes a game-theoretic approach to achieve a dynamic selection of the forwarding paths. The proposal seeks to improve the users' own benefits whilst using the common scarce resources efficiently. It takes into account the importance of the video frames in the decoding process, which improves the quality of the received video. Our scheme has proved to enhance the performance of the framework and the experience of the end users. Simulations have been carried out to show the benefits of our proposal under different situations where high interfering traffic and mobility of the nodes are present.
1.003812
0.0057
0.003419
0.003093
0.002564
0.001611
0.001257
0.000812
0.000159
0.000015
0
0
0
0
Stochastic Power Grid Analysis Considering Process Variations In this paper, we investigate the impact of interconnect and device process variations on voltage fluctuations in power grids. We consider random variations in the power grid's electrical parameters as spatial stochastic processes and propose a new and efficient method to compute the stochastic voltage response of the power grid. Our approach provides an explicit analytical representation of the stochastic voltage response using orthogonal polynomials in a Hilbert space. The approach has been implemented in a prototype software called OPERA (Orthogonal Polynomial Expansions for Response Analysis). Use of OPERA on industrial power grids demonstrated speed-ups of up to two orders of magnitude. The results also show a significant variation of about ±35% in the nominal voltage drops at various nodes of the power grids and demonstrate the need for variation-aware power grid analysis.
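The orthogonal-polynomial representation mentioned in the abstract is a polynomial chaos expansion of each node voltage; in generic notation (not necessarily the paper's exact formulation):

```latex
\[
v(t,\boldsymbol{\xi}) \;\approx\; \sum_{k=0}^{P} v_k(t)\,\Psi_k(\boldsymbol{\xi}),
\qquad
v_k(t) = \frac{\langle v(t,\cdot)\,\Psi_k \rangle}{\langle \Psi_k^2 \rangle},
\]
```

where the Ψ_k are Hermite polynomials orthogonal with respect to the Gaussian measure of the variation parameters ξ, and the deterministic coefficients v_k(t) are obtained by Galerkin projection, giving an explicit analytical description of the voltage statistics.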
Statistical Analysis of Power Grid Networks Considering Lognormal Leakage Current Variations with Spatial Correlation As the technology scales into 90nm and below, process-induced variations become more pronounced. In this paper, we propose an efficient stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering log-normal leakage current variations with spatial correlation. The new analysis is based on the Hermite polynomial chaos (PC) representation of random processes. It differs from the existing Hermite PC based method for power grid analysis, which models all the random variations as Gaussian processes without considering spatial correlation. The new method focuses on the impacts of stochastic sub-threshold leakage currents, which are modeled as log-normally distributed random variables, on the power grid voltage variations. To consider the spatial correlation, we apply an orthogonal decomposition to map the correlated random variables into independent variables. Our experimental results show that the new method is more accurate than the Gaussian-only Hermite PC method using the Taylor expansion method for analyzing leakage current variations, and two orders of magnitude faster than the Monte Carlo method with small variance errors. We also show that the spatial correlation may lead to large errors if it is not considered in the statistical analysis.
A circuit level fault model for resistive bridges Delay faults are an increasingly important test challenge. Modeling bridge faults as delay faults helps delay tests detect more bridge faults. Traditional bridge fault models are incomplete because they either model only the logic faults or are not efficient to use in delay tests for large circuits. In this article, we propose a physically realistic yet economical resistive bridge fault model that models delay faults as well as logic faults. An accurate yet simple delay calculation method is proposed. We also enumerate all possible fault behaviors and present the relationship between input patterns and output behaviors, which is useful in ATPG. Our fault simulation results show the benefit of at-speed tests.
Model-based reliability analysis Modeling, in conjunction with testing, is a rich source of insight. Model parameters are easily controlled and monitoring can be done unobtrusively. The ability to inject faults without otherwise affecting performance is particularly critical. Many iterations can be done quickly with a model while varying parameters and conditions based on a small number of validation tests. The objective of model-based reliability analysis (MBRA) is to identify ways to capitalize on the insights gained from modeling to make both qualitative and quantitative statements about product reliability. MBRA is developed and exercised in the realm of weapon system development and maintenance, where the challenges of severe environmental requirements, limited production quantities, and use of one-shot devices can make testing prohibitively expensive. However, the general principles are also applicable to other product types.
Practical Implementation of Stochastic Parameterized Model Order Reduction via Hermite Polynomial Chaos This paper describes the stochastic model order reduction algorithm via stochastic Hermite polynomials from the practical implementation perspective. Compared with existing work on stochastic interconnect analysis and parameterized model order reduction, we generalized the input variation representation using polynomial chaos (PC) to allow for accurate modeling of non-Gaussian input variations. We also explored the implicit system representation using sub-matrices and improved the efficiency of solving the linear equations by utilizing the block matrix structure of the augmented system. Experiments show that our algorithm matches Monte Carlo methods very well while keeping the algorithm efficient, and the PC representation of non-Gaussian variables achieves higher accuracy than the Taylor representation used in previous work (Wang et al., 2004).
Sparse transformations and preconditioners for 3-D capacitance extraction Three-dimensional (3-D) capacitance-extraction algorithms are important due to their high accuracy. However, the current 3-D algorithms are slow and thus their application is limited. In this paper, we present a novel method to significantly speed up capacitance-extraction algorithms based on boundary element methods (BEMs), under uniform and multiple dielectrics. The n×n coefficient matrix in the BEM is dense, even when approximated with the fast multipole method or hierarchical-refinement method, where n is the number of panels needed to discretize the conductor surfaces and dielectric interfaces. As a result, effective preconditioners are hard to obtain and iterative solvers converge slowly. In this paper, we introduce a linear transformation to convert the n×n dense coefficient matrix into a sparse matrix with O(n) nonzero entries, and then use incomplete factorization to produce a very effective preconditioner. For the k×k bus-crossing benchmark, our method requires at most four iterations, whereas previous best methods such as FastCap and HiCap require 10-20 iterations. As a result, our algorithm is up to 70 times faster than FastCap and up to 2 times faster than HiCap on these benchmarks. Additional experiments illustrate that our method consistently outperforms previous best methods by a large margin on complex industrial problems with multiple dielectrics.
Efficient large-scale power grid analysis based on preconditioned krylov-subspace iterative methods In this paper, we propose preconditioned Krylov-subspace iterative methods to perform efficient DC and transient simulations for large-scale linear circuits with an emphasis on power delivery circuits. We also prove that a circuit with inductors can be simplified from MNA to NA format, and the matrix becomes an s.p.d. matrix. This property makes it suitable for the conjugate gradient method with incomplete Cholesky decomposition as the preconditioner, which is faster than other direct and iterative methods. Extensive experimental results on large-scale industrial power grid circuits show that our method is over 200 times faster for DC analysis and around 10 times faster for transient simulation compared to SPICE3. Furthermore, our algorithm reduces memory usage by over 75% compared to SPICE3 while accuracy is not compromised.
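The core numerical recipe named in this abstract (conjugate gradient on an s.p.d. system with an incomplete-factorization preconditioner) can be sketched with SciPy. SciPy ships incomplete LU rather than incomplete Cholesky, so spilu stands in for the preconditioner here; the synthetic tridiagonal system and its size are made-up stand-ins for the industrial power grids in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small synthetic s.p.d. conductance-like matrix (illustrative, not a real power grid).
n = 2000
G = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")
b = np.random.default_rng(0).random(n)            # current injections

# Incomplete factorization as preconditioner (ILU in place of incomplete Cholesky).
ilu = spla.spilu(G, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.cg(G, b, M=M)                      # preconditioned conjugate gradient
print("converged" if info == 0 else f"cg returned info={info}")
```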
Fast Variational Analysis of On-Chip Power Grids by Stochastic Extended Krylov Subspace Method This paper proposes a novel stochastic method for analyzing the voltage drop variations of on-chip power grid networks, considering lognormal leakage current variations. The new method, called StoEKS, applies Hermite polynomial chaos to represent the random variables in both power grid networks and input leakage currents. However, different from the existing orthogonal polynomial-based stochastic simulation method, the extended Krylov subspace (EKS) method is employed to compute variational responses from the augmented matrices consisting of the coefficients of Hermite polynomials. Our contribution lies in the acceleration of the spectral stochastic method using the EKS method to quickly solve the variational circuit equations for the first time. By using the reduction technique, the new method partially mitigates the increased circuit-size problem associated with the augmented matrices from the Galerkin-based spectral stochastic method. Experimental results show that the proposed method is about two orders of magnitude faster than the existing Hermite PC-based simulation method and many orders of magnitude faster than Monte Carlo methods with marginal errors. StoEKS is scalable for analyzing much larger circuits than the existing Hermite PC-based methods.
Recycling Krylov Subspaces for Sequences of Linear Systems Many problems in science and engineering require the solution of a long sequence of slowly changing linear systems. We propose and analyze two methods that significantly reduce the total number of matrix-vector products required to solve all systems. We consider the general case where both the matrix and right-hand side change, and we make no assumptions regarding the change in the right-hand sides. Furthermore, we consider general nonsingular matrices, and we do not assume that all matrices are pairwise close or that the sequence of matrices converges to a particular matrix. Our methods work well under these general assumptions, and hence form a significant advancement with respect to related work in this area. We can reduce the cost of solving subsequent systems in the sequence by recycling selected subspaces generated for previous systems. We consider two approaches that allow for the continuous improvement of the recycled subspace at low cost. We consider both Hermitian and non-Hermitian problems, and we analyze our algorithms both theoretically and numerically to illustrate the effects of subspace recycling. We also demonstrate the effectiveness of our algorithms for a range of applications from computational mechanics, materials science, and computational physics.
Efficient Iterative Time Preconditioners for Harmonic Balance RF Circuit Simulation Efficient iterative time preconditioners for Krylov-based harmonic balance circuit simulators are proposed. Some numerical experiments assess their performance relative to the well-known block-diagonal frequency preconditioner and the previously proposed time preconditioner.
Iterative Hard Thresholding for Compressed Sensing Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper): • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
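The recursion analyzed in this abstract is short enough to write out; the following toy version keeps the k largest entries after each gradient step. The step size, problem sizes and the random test signal are illustrative choices, not the settings analyzed in the paper.

```python
import numpy as np

def iht(y, A, k, iters=200):
    """Iterative hard thresholding: gradient step on ||y - Ax||^2/2, then keep the k largest entries."""
    m, n = A.shape
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # conservative step size
    x = np.zeros(n)
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)
        keep = np.argpartition(np.abs(x), -k)[-k:]
        mask = np.zeros(n, dtype=bool); mask[keep] = True
        x[~mask] = 0.0                            # hard threshold
    return x

rng = np.random.default_rng(1)
m, n, k = 80, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A @ x_true, A, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small on this easy instance
```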
Possibility Theory in Constraint Satisfaction Problems: Handling Priority, Preference and Uncertainty In classical Constraint Satisfaction Problems (CSPs) knowledge is embedded in a set of hard constraints, each one restricting the possible values of a set of variables. However constraints in real world problems are seldom hard, and CSPs are often idealizations that do not account for the preference among feasible solutions. Moreover some constraints may have priority over others. Lastly, constraints may involve uncertain parameters. This paper advocates the use of fuzzy sets and possibility theory as a realistic approach for the representation of these three aspects. Fuzzy constraints encompass both preference relations among possible instantiations and priorities among constraints. In a Fuzzy Constraint Satisfaction Problem (FCSP), a constraint is satisfied to a degree (rather than satisfied or not satisfied) and the acceptability of a potential solution becomes a gradual notion. Even if the FCSP is partially inconsistent, best instantiations are provided owing to the relaxation of some constraints. Fuzzy constraints are thus flexible. CSP notions of consistency and k-consistency can be extended to this framework and the classical algorithms used in CSP resolution (e.g., tree search and filtering) can be adapted without losing much of their efficiency. Most classical theoretical results remain applicable to FCSPs. In the paper, various types of constraints are modelled in the same framework. The handling of uncertain parameters is carried out in the same setting because possibility theory can account for both preference and uncertainty. The presence of uncertain parameters leads to ill-defined CSPs, where the set of constraints which defines the problem is not precisely known.
A unified framework of opening and closure operators with respect to arbitrary fuzzy relations This paper is devoted to a general concept of openness and closedness with respect to arbitrary fuzzy relations – along with appropriate opening and closure operators. It is shown that the proposed framework unifies existing concepts, in particular, the one for fuzzy preorderings as well as the triangular norm-based approach to fuzzy mathematical morphology.
SPECO: Stochastic Perturbation based Clock tree Optimization considering temperature uncertainty Modern computing system applications or workloads can bring significant non-uniform temperature gradient on-chip, and hence can cause significant temperature uncertainty during clock-tree synthesis. Existing designs of clock-trees have to assume a given time-invariant worst-case temperature map but cannot deal with a set of temperature maps under a set of workloads. For robust clock-tree synthesis considering temperature uncertainty, this paper presents a new problem formulation: Stochastic PErturbation based Clock Optimization (SPECO). In SPECO algorithm, one nominal clock-tree is pre-synthesized with determined merging points. The impact from the stochastic temperature variation is modeled by perturbation (or small physical displacement) of merging points to offset the induced skews. Because the implementation cost is reduced but the design complexity is increased, the determination of optimal positions of perturbed merging points requires a computationally efficient algorithm. In this paper, one Non-Monte-Carlo (NMC) method is deployed to generate skew and skew variance by one-time analysis when a set of stochastic temperature maps is already provided. Moreover, one principal temperature-map analysis is developed to reduce the design complexity by clustering correlated merging points based on the subspace of the correlation matrix. As a result, the new merging points can be efficiently determined level by level with both skew and its variance reduced. The experimental results show that our SPECO algorithm can effectively reduce the clock-skew and its variance under a number of workloads with minimized wire-length overhead and computational cost.
1.014792
0.012447
0.011662
0.011662
0.00505
0.003947
0.00189
0.000386
0.000177
0.000032
0
0
0
0
Multimodal communication in animals, humans and robots: An introduction to perspectives in brain-inspired informatics. Recent years have seen convergence in research on brain mechanisms and neurocomputational approaches, culminating in the creation of a new generation of robots whose artificial “brains” respect neuroscience principles and whose “cognitive” systems venture into higher cognitive domains such as planning and action sequencing, complex object and concept processing, and language. The present article gives an overview of selected projects in this general multidisciplinary field.
Towards situated speech understanding: visual context priming of language models Fuse is a situated spoken language understanding system that uses visual context to steer the interpretation of speech. Given a visual scene and a spoken description, the system finds the object in the scene that best fits the meaning of the description. To solve this task, Fuse performs speech recognition and visually-grounded language understanding. Rather than treat these two problems separately, knowledge of the visual semantics of language and the specific contents of the visual scene are fused during speech processing. As a result, the system anticipates various ways a person might describe any object in the scene, and uses these predictions to bias the speech recognizer towards likely sequences of words. A dynamic visual attention mechanism is used to focus processing on likely objects within the scene as spoken utterances are processed. Visual attention and language prediction reinforce one another and converge on interpretations of incoming speech signals which are most consistent with visual context. In evaluations, the introduction of visual context into the speech recognition process results in significantly improved speech recognition and understanding accuracy. The underlying principles of this model may be applied to a wide range of speech understanding problems including mobile and assistive technologies in which contextual information can be sensed and semantically interpreted to bias processing.
Embodied Language Understanding with a Multiple Timescale Recurrent Neural Network How the human brain understands natural language and what we can learn for intelligent systems is an open research question. Recently, researchers claimed that language is embodied in most, if not all, sensory and sensorimotor modalities and that the brain's architecture favours the emergence of language. In this paper we investigate the characteristics of such an architecture and propose a model based on the Multiple Timescale Recurrent Neural Network, extended by embodied visual perception. We show that such an architecture can learn the meaning of utterances with respect to visual perception and that it can produce verbal utterances that correctly describe previously unknown scenes.
The grounding of higher order concepts in action and language: A cognitive robotics model. In this paper we present a neuro-robotic model that uses artificial neural networks for investigating the relations between the development of symbol manipulation capabilities and of sensorimotor knowledge in the humanoid robot iCub. We describe a cognitive robotics model in which the linguistic input provided by the experimenter guides the autonomous organization of the robot’s knowledge. In this model, sequences of linguistic inputs lead to the development of higher-order concepts grounded on basic concepts and actions. In particular, we show that higher-order symbolic representations can be indirectly grounded in action primitives directly grounded in sensorimotor experiences. The use of recurrent neural network also permits the learning of higher-order concepts based on temporal sequences of action primitives. Hence, the meaning of a higher-order concept is obtained through the combination of basic sensorimotor knowledge. We argue that such a hierarchical organization of concepts can be a possible account for the acquisition of abstract words in cognitive robots.
Emergence Of Functional Hierarchy In A Multiple Timescale Neural Network Model: A Humanoid Robot Experiment It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties ("multiple timescales"). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems.
Variations, margins, and statistics Design margining is used to account for design uncertainties in the measurement of performance, and thereby ensures that actual manufactured parts will operate within predicted bounds. As process and environmental variations become increasingly severe and complex in nanometer process technology, design margining overheads have increased correspondingly. This paper describes the types of process and environmental variations, their impact on performance, and the traditional design margining process used to account for these uncertainties. We consider statistical static timing analysis (SSTA) in the context of its ability to reduce timing margins through more accurate modeling of variations, and quantify potential benefits of SSTA for setup and hold time margin reduction. Combining SSTA with complementary techniques for systematic variation-aware and voltage-variation-aware timing provides meaningful design margin reduction. We introduce the concept of activity based operating condition as a supporting construct for variation-aware STA flows.
NIST Net: a Linux-based network emulation tool Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
Compressive wireless sensing Compressive Sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion trade-off. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
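To make the random-projection idea concrete, the sketch below gathers M pseudo-random ±1 projections of a compressible sensor field and reconstructs it at a fusion center by least squares in a truncated DCT basis. Everything here (field, basis, sizes) is an illustrative stand-in and not the paper's matched source-channel scheme.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)
N, M, K = 512, 64, 32                         # sensors, projections, retained DCT atoms (illustrative)

Psi = idct(np.eye(N), norm="ortho", axis=0)   # DCT synthesis basis (columns are atoms)
coef_true = np.zeros(N); coef_true[[3, 7, 15]] = [1.0, -0.6, 0.3]
field = Psi @ coef_true                       # smooth, compressible field of sensor readings

Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # one +/-1 chip per sensor and projection
y = Phi @ field                                           # M projections received by the fusion center

coef_hat, *_ = np.linalg.lstsq(Phi @ Psi[:, :K], y, rcond=None)
field_hat = Psi[:, :K] @ coef_hat
print(np.linalg.norm(field_hat - field) / np.linalg.norm(field))  # near-perfect on this toy field
```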
On the quasi-Monte Carlo method with Halton points for elliptic PDEs with log-normal diffusion. This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
Proceedings of the 41st Design Automation Conference, DAC 2004, San Diego, CA, USA, June 7-11, 2004
Compressive sampling for streaming signals with sparse frequency content Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically not considered, yet they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystems and information technology. In this paper a QoE evaluator is described for assessing the service delivery in a distributed and integrated environment on a per-user and per-service basis.
A model to perform knowledge-based temporal abstraction over multiple signals In this paper we propose the Multivariable Fuzzy Temporal Profile model (MFTP), which enables the projection of expert knowledge on a physical system over a computable description. This description may be used to perform automatic abstraction on a set of parameters that represent the temporal evolution of the system. This model is based on the constraint satisfaction problem (CSP) formalism, which enables an explicit representation of the knowledge, and on fuzzy set theory, from which it inherits the ability to model the imprecision and uncertainty that are characteristic of the vagueness of human knowledge. We also present an application of the MFTP model to the recognition of landmarks in mobile robotics, specifically to the detection of doors on ultrasound sensor signals from a Nomad 200 robot.
Fuzzy optimization of units products in mix-product selection problem using fuzzy linear programming approach In this paper, the modified S-curve membership function methodology is used in a real life industrial problem of mix product selection. This problem occurs in production planning management, whereby a decision maker plays an important role in making decisions in an uncertain environment. As analysts, we try to find a good enough solution for the decision maker to make a final decision. An industrial application of fuzzy linear programming (FLP) through the S-curve membership function has been investigated using a set of real life data collected from a Chocolate Manufacturing Company. The problem of fuzzy product mix selection has been defined. The objective of this paper is to find optimal units of products with a higher level of satisfaction, with vagueness as a key factor. Since several decisions were to be taken, a table of optimal units of products with respect to vagueness and degree of satisfaction has been defined to identify the solution with a higher level of units of products and a higher degree of satisfaction. The fuzzy outcome shows that higher units of products need not lead to a higher degree of satisfaction. The findings of this work indicate that the optimal decision depends on the vagueness factor in the fuzzy system of the mix product selection problem. Furthermore, a high level of units of products is obtained when the vagueness is low.
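A generic logistic S-curve makes the role of the vagueness parameter easy to see: it maps a resource level to a degree of satisfaction that drops off more or less sharply as vagueness changes. This is only an illustrative form with made-up numbers, not necessarily the exact modified S-curve membership function of the paper.

```python
import numpy as np

def s_curve(x, lo, hi, alpha=13.8):
    """Logistic S-curve membership on [lo, hi]; larger alpha means a sharper (less vague) transition.
    Illustrative form only, not the paper's exact modified S-curve."""
    t = (np.asarray(x, dtype=float) - lo) / (hi - lo)
    return 1.0 / (1.0 + np.exp(alpha * (t - 0.5)))

# Degree of satisfaction of a fuzzy resource limit between 480 and 520 units (made-up numbers).
for used in (485, 500, 515):
    for alpha in (5.0, 13.8, 30.0):
        print(used, alpha, round(float(s_curve(used, 480, 520, alpha)), 3))
```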
1.2105
0.2105
0.2105
0.0735
0.032786
0
0
0
0
0
0
0
0
0
On intuitionistic gradation of openness In this paper, we introduce a concept of intuitionistic gradation of openness on fuzzy subsets of a nonempty set X and define an intuitionistic fuzzy topological space. We prove that the category of intuitionistic fuzzy topological spaces and gradation preserving mappings is a topological category. We study compactness of intuitionistic fuzzy topological spaces and prove an analogue of Tychonoff's theorem.
Vague sets are intuitionistic fuzzy sets We recapitulate the definition given by Atanassov (1983) of intuitionistic fuzzy sets as well as the definition of vague sets given by Gau and Buehrer (1993) and see that both definitions coincide.
Regranulation: A granular algorithm enabling communication between granular worlds In this paper, we describe a granular algorithm for translating information between two granular worlds, represented as fuzzy rulebases. These granular worlds are defined on the same universe of discourse, but employ different granulations of this universe. In order to translate information from one granular world to the other, we must regranulate the information so that it matches the information granularity of the target world. This is accomplished through the use of a first-order interpolation algorithm, implemented using linguistic arithmetic, a set of elementary granular computing operations. We first demonstrate this algorithm by studying the common “fuzzy-PD” rulebase at several different granularities, and conclude that the “3×3” granulation may be too coarse for this objective. We then examine the question of what the “natural” granularity of a system might be; this is studied through a 10-fold cross-validation experiment involving three different granulations of the same underlying mapping. For the problem under consideration, we find that a 7×7 granulation appears to be the minimum necessary precision.
Reduction and axiomization of covering generalized rough sets This paper investigates some basic properties of covering generalized rough sets, and their comparison with the corresponding ones of Pawlak's rough sets, a tool for data mining. The focus here is on the concepts and conditions for two coverings to generate the same covering lower approximation or the same covering upper approximation. The concept of reducts of coverings is introduced and the procedure to find a reduct for a covering is given. It has been proved that the reduct of a covering is the minimal covering that generates the same covering lower approximation or the same covering upper approximation, so this concept is also a technique to get rid of redundancy in data mining. Furthermore, it has been shown that covering lower and upper approximations determine each other. Finally, a set of axioms is constructed to characterize the covering lower approximation operation.
Relationships among three types of covering rough sets Rough sets, a technique of granular computing, deal with the vagueness and granularity in information systems. They are based on equivalence relations on a set, or equivalently, on a partition of the set. Covering is an extension of a partition and a more feasible concept for coping with incompleteness in information, thus the classical rough sets based on partition are extended to covering based rough sets. When a covering is introduced, there is more than one possibility to define the upper approximation. It is necessary to study the properties of these different types of upper approximations and the relationships among them. This paper presents three kinds of covering generalized rough sets and explores the relationships among them. The main results are conditions under which two different types of upper approximation operations are identical.
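One commonly used pair of covering approximations (the lower approximation as the union of the blocks contained in the target set, and an upper approximation as the union of the blocks that meet it) can be computed directly, as in the sketch below. The paper compares several non-equivalent upper approximations; this is only one of them, and the covering is a made-up example.

```python
def covering_approximations(cover, target):
    """Lower/upper approximation of `target` under a covering (one common pair of definitions)."""
    target = set(target)
    lower = set().union(*(set(K) for K in cover if set(K) <= target))
    upper = set().union(*(set(K) for K in cover if set(K) & target))
    return lower, upper

C = [{1, 2}, {2, 3}, {3, 4, 5}, {5}]                 # a covering of U = {1, 2, 3, 4, 5} (illustrative)
print(covering_approximations(C, {1, 2, 3}))         # ({1, 2, 3}, {1, 2, 3, 4, 5})
```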
Towards general measures of comparison of objects We propose a classification of measures enabling the comparison of fuzzy characterizations of objects, according to their properties and the purpose of their utilization. We establish the difference between measures of satisfiability, resemblance, inclusion and dissimilarity. We base our study on concepts analogous to those developed by A. Tversky in his general work on similarities.
Subsethood, entropy, and cardinality for interval-valued fuzzy sets---An algebraic derivation In this paper a unified formulation of subsethood, entropy, and cardinality for interval-valued fuzzy sets (IVFSs) is presented. An axiomatic skeleton for subsethood measures in the interval-valued fuzzy setting is proposed, in order for subsethood to reduce to an entropy measure. By exploiting the equivalence between the structures of IVFSs and Atanassov's intuitionistic fuzzy sets (A-IFSs), the notion of average possible cardinality is presented and its connection to least and biggest cardinalities, proposed in [E. Szmidt, J. Kacprzyk, Entropy for intuitionistic fuzzy sets, Fuzzy Sets and Systems 118 (2001) 467-477], is established both algebraically and geometrically. A relation with the cardinality of fuzzy sets (FSs) is also demonstrated. Moreover, the entropy-subsethood and interval-valued fuzzy entropy theorems are stated and algebraically proved, which generalize the work of Kosko [Fuzzy entropy and conditioning, Inform. Sci. 40(2) (1986) 165-174; Fuzziness vs. probability, International Journal of General Systems 17(2-3) (1990) 211-240; Neural Networks and Fuzzy Systems, Prentice-Hall International, Englewood Cliffs, NJ, 1992; Intuitionistic Fuzzy Sets: Theory and Applications, Vol. 35 of Studies in Fuzziness and Soft Computing, Physica-Verlag, Heidelberg, 1999] for FSs. Finally, connections of the proposed subsethood and entropy measures for IVFSs with corresponding definitions for FSs and A-IFSs are provided.
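Reading the membership interval of each element as [lower, upper], the least and biggest sigma-count cardinalities named in the abstract above are just the sums of the interval endpoints, and the average possible cardinality is their midpoint. The sketch below reflects that reading on a small made-up domain; it is not code from the paper.

```python
import numpy as np

def ivfs_cardinalities(lower_mu, upper_mu):
    """Least, biggest and average possible (sigma-count) cardinalities of an
    interval-valued fuzzy set on a finite domain (hedged reading of the abstract's notions)."""
    least = float(np.sum(lower_mu))
    biggest = float(np.sum(upper_mu))
    return least, biggest, (least + biggest) / 2.0

lower = np.array([0.2, 0.5, 0.9, 0.4])    # lower membership bounds (illustrative)
upper = np.array([0.4, 0.7, 1.0, 0.6])    # upper membership bounds
print(ivfs_cardinalities(lower, upper))   # (2.0, 2.7, 2.35)
```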
Perceptual reasoning for perceptual computing: a similarity-based approach Perceptual reasoning (PR) is an approximate reasoning method that can be used as a computing-with-words (CWW) engine in perceptual computing. There can be different approaches to implement PR, e.g., firing-interval-based PR (FI-PR), which has been proposed in J. M. Mendel and D. Wu, IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1550-1564, Dec. 2008 and similarity-based PR (SPR), which is proposed in this paper. Both approaches satisfy the requirement on a CWW engine that the result of combining fired rules should lead to a footprint of uncertainty (FOU) that resembles the three kinds of FOUs in a CWW codebook. A comparative study shows that S-PR leads to output FOUs that resemble word FOUs, which are obtained from subject data, much more closely than FI-PR; hence, S-PR is a better choice for a CWW engine than FI-PR.
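The similarity computation at the heart of S-PR can be illustrated with its type-1 analogue: a Jaccard similarity between sampled membership functions. The actual method operates on interval type-2 footprints of uncertainty; the triangular words below are made-up placeholders.

```python
import numpy as np

def jaccard_similarity(mu_a, mu_b):
    """Jaccard similarity of two type-1 fuzzy sets sampled on the same domain."""
    mu_a, mu_b = np.asarray(mu_a), np.asarray(mu_b)
    return float(np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum())

x = np.linspace(0.0, 10.0, 101)
some = np.clip(1.0 - np.abs(x - 3.0) / 2.0, 0.0, 1.0)       # triangular word "some" (illustrative)
moderate = np.clip(1.0 - np.abs(x - 5.0) / 2.0, 0.0, 1.0)   # triangular word "a moderate amount"
print(round(jaccard_similarity(some, moderate), 3))
```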
Lattices of fuzzy sets and bipolar fuzzy sets, and mathematical morphology Mathematical morphology is based on the algebraic framework of complete lattices and adjunctions, which endows it with strong properties and allows for multiple extensions. In particular, extensions to fuzzy sets of the main morphological operators, such as dilation and erosion, can be done while preserving all properties of these operators. Another extension concerns bipolar fuzzy sets, where both positive information and negative information are handled, along with their imprecision. We detail these extensions from the point of view of the underlying lattice structure. In the case of bipolarity, its two-component nature raises the question of defining a proper partial ordering. In this paper, we consider Pareto (component-wise) and lexicographic orderings.
Applying a direct multi-granularity linguistic and strategy-oriented aggregation approach on the assessment of supply performance Supply performance exhibits continuous behavior that covers past, present and future time horizons. Thus, supply performance carries distinct uncertainty in individual behavior, which is inadequate to assess with quantification alone. This study utilizes linguistic variables instead of numerical variables to offset the inaccuracy of quantification, and employs a fitting linguistic scale in accordance with the characteristics of supply behavior to enhance applicability. Furthermore, uniformity is introduced to transform linguistic information uniformly from different scales. Finally, the linguistic ordered weighted averaging operator with maximal entropy is applied directly to aggregate the combination of linguistic information and product strategy to ensure that the assessment results meet the enterprise requirements, and then to emulate mental decision making in humans in the linguistic manner.
A simple method of forecasting based on fuzzy time series In fuzzy time series forecasting various methods have been developed to establish the fuzzy relations on time series data having linguistic values for forecasting the future values. However, the major problem in fuzzy time series forecasting is the accuracy of the forecasted values. The present paper proposes a new method of fuzzy time series forecasting based on difference parameters. The proposed method is a simplified computational approach for the forecasting. The method has been implemented on the historical enrollment data of the University of Alabama (adapted by Song and Chissom) and the forecasted values have been compared with the results of the existing methods to show its superiority. Further, the proposed method has also been implemented on a real life problem of crop production forecast of the wheat crop and the results have been compared with other methods.
Random Projections for Manifold Learning We propose a novel method for linear dimensionality reduction of manifold-modeled data. First, we show that with a small number M of random projections of sample points in RN belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy. Second, we rigorously prove that using only this set of random projections, we can estimate the structure of the underlying manifold. In both cases, the number of random projections required is linear in K and logarithmic in N, meaning that K < M ≪ N. To handle practical situations, we develop a greedy algorithm to estimate the smallest size of the projection space required to perform manifold learning. Our method is particularly relevant in distributed sensing systems and leads to significant potential savings in data acquisition, storage and transmission costs.
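The geometric fact the paper builds on (a small number of random projections roughly preserves pairwise distances between samples of a low-dimensional manifold) is easy to check numerically. The randomly oriented circle and the dimensions below are illustrative; the paper's ID-estimation and manifold-learning steps are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
N, M, num = 2000, 32, 200                  # ambient dim, projection dim, number of samples (illustrative)

# Samples from a 1-D manifold: a randomly oriented circle embedded in R^N.
theta = rng.uniform(0.0, 2.0 * np.pi, num)
u, v = rng.standard_normal((2, N))
X = np.outer(np.cos(theta), u) + np.outer(np.sin(theta), v)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random projection to R^M
Y = X @ Phi.T

ratios = pdist(Y) / pdist(X)
print(round(ratios.min(), 3), round(ratios.max(), 3))   # distance distortion stays close to 1
```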
The application of compressed sensing for photo-acoustic tomography. Photo-acoustic (PA) imaging has been developed for different purposes, but recently, the modality has gained interest with applications to small animal imaging. As a technique it is sensitive to endogenous optical contrast present in tissues and, contrary to diffuse optical imaging, it promises to bring high resolution imaging for in vivo studies at midrange depths (3-10 mm). Because of the limited amount of radiation tissues can be exposed to, existing reconstruction algorithms for circular tomography require a great number of measurements and averaging, implying long acquisition times. Time-resolved PA imaging is therefore possible only at the cost of complex and expensive electronics. This paper suggests a new reconstruction strategy using the compressed sensing formalism which states that a small number of linear projections of a compressible image contain enough information for reconstruction. By directly sampling the image to recover in a sparse representation, it is possible to dramatically reduce the number of measurements needed for a given quality of reconstruction.
An image super-resolution scheme based on compressive sensing with PCA sparse representation Image super-resolution (SR) reconstruction has been an important research field due to its wide applications. Although many SR methods have been proposed, some problems remain to be solved, and the quality of the reconstructed high-resolution (HR) image needs to be improved. To solve these problems, in this paper we propose an image super-resolution scheme based on compressive sensing theory with PCA sparse representation. We focus on the measurement matrix design of the CS process and the implementation of the sparse representation function for the PCA transformation. The measurement matrix design is based on the relation between the low-resolution (LR) image and the reconstructed high-resolution (HR) image, while the implementation of the PCA sparse representation function is based on the PCA transformation process. According to whether the covariance matrix of the HR image is known or not, two kinds of SR models are given. Finally, experiments comparing the proposed scheme with the traditional interpolation methods and the CS scheme with DCT sparse representation are conducted. The experimental results both on the smooth image and the image with complex textures show that the proposed scheme is effective.
1.116295
0.006011
0.000931
0.000493
0.00039
0.000122
0.000025
0.000008
0.000002
0
0
0
0
0
A concise review of the quality of experience assessment for video streaming. • Concise and up-to-date review of quality assessment for video streaming services. • Description of a typical video assessment process. • Analysis of current research on subjective, objective, and hybrid QoE assessment. • Discussion of future trends and challenges for QoE in video streaming services.
An example of real time QoE IPTV service estimator This paper considers an estimator which includes mathematical modelling of physical channel parameters as the information carrier and the weakest links in the telecommunication chain of information transfer. It also identifies the necessary physical layer parameters which influence the quality of multimedia service delivery, or QoE (Quality of Experience). By modelling the above mentioned parameters, the relation between degradations appearing in the channel between the user and the central telecommunication equipment is defined, with domination of one medium used for information transfer with a certain error probability. Degradations in a physical channel can be noticed by observing the change in values of the channel transfer function or the appearance of increased noise. Estimation of the QoE of the IPTV (Internet Protocol Television) service is especially necessary during delivery of real time services. In that case the mentioned degradations may appear at any moment and cause packet loss.
The Impact of Interactivity on the QoE: A Preliminary Analysis The interactivity in multimedia services concerns the input/output process of the user with the system, as well as its cooperativity. It is an important element that affects the overall Quality of Experience (QoE), which may even mask the impact of the quality level of the (audio and visual) signal itself on the overall user perception. This work is a preliminary study aimed at evaluating the weight of the interactivity, which relies on subjective assessments that have been conducted varying the artefacts, genre and interactivity features of the video streaming services evaluated by the subjects. Subjective evaluations have been collected from 25 subjects in compliance with ITU-T Recommendation P.910 through single-stimulus Absolute Category Rating (ACR). It resulted that the impact of the interactivity is influenced by the presence of other components, such as the presence of buffer starvations and the type of content displayed. An objective quality metric able to measure the influence of the interactivity on the QoE has also been defined, which has proved to be highly correlated with subjective results. We concluded that the interactivity feature can be successfully represented by either an additive or a multiplicative component to be added to existing quality metrics.
Live transcoding and streaming-as-a-service with MPEG-DASH Multimedia content delivery and real-time streaming over the top of the existing infrastructure is nowadays part and parcel of every media ecosystem thanks to open standards and the adoption of the Hypertext Transfer Protocol (HTTP) as its primary means of transportation. Hardware encoder manufacturers have adapted their product lines to support dynamic adaptive streaming over HTTP but suffer from the inflexibility to provide scalability on demand, specifically for event-based live services that are only offered for a limited period of time. The cloud computing paradigm allows for this kind of flexibility and provides the necessary elasticity in order to easily scale with the demand required for such use case scenarios. In this paper we describe bitcodin, our transcoding and streaming-as-a-service platform based on open standards (i.e., MPEG-DASH), which is deployed on standard cloud and content delivery infrastructures to enable high-quality streaming to heterogeneous clients. It is currently deployed for video on demand, 24/7 live, and event-based live services using bitdash, our adaptive client framework.
QoE Evaluation of Multimedia Services Based on Audiovisual Quality and User Interest. Quality of experience (QoE) has significant influence on whether or not a user will choose a service or product in the competitive era. For multimedia services, there are various factors in a communication ecosystem working together on users, which stimulate their different senses inducing multidimensional perceptions of the services, and inevitably increase the difficulty in measurement and estim...
QoE-oriented 3D video transcoding for mobile streaming With advances in mobile 3D displays, mobile 3D video is already enabled by wireless multimedia networking, and it will gradually become popular since it can let people enjoy the natural 3D experience anywhere and anytime. At the current stage, mobile 3D video is generally delivered over a heterogeneous network combining wired and wireless channels. How to guarantee the optimal 3D visual quality of experience (QoE) for mobile 3D video streaming is one of the important topics concerning the service provider. In this article, we propose a QoE-oriented transcoding approach to enhance the quality of the mobile 3D video service. By learning the pre-controlled QoE patterns of 3D contents, the proposed 3D visual QoE inferring model can be utilized to regulate the transcoding configurations in real-time according to the feedback of network and user-end device information. In the learning stage, we propose a piecewise linear mean opinion score (MOS) interpolation method to further reduce the cumbersome manual work of preparing QoE patterns. Experimental results show that the proposed transcoding approach can provide an adapted 3D stream to the heterogeneous network, and further provide superior QoE performance compared to fixed quantization parameter (QP) transcoding and mean squared error (MSE) optimized transcoding for mobile 3D video streaming.
Logarithmic laws in service quality perception: where microeconomics meets psychophysics and quality of experience Utility functions, describing the value of a good or a resource from an end user's point of view, are widely used as an important ingredient for all sorts of microeconomic models. In the context of resource allocation in communication networks, a logarithmic version of utility usually serves as the standard example due to its simplicity and mathematical tractability. In this paper we argue that indeed there are much more (and better) reasons to consider logarithmic utilities as really paradigmatic, at least when it comes to characterizing user experience with specific telecommunication services. We justify this claim with the help of recent results from Quality of Experience (QoE) research, and demonstrate that, especially for Voice-over-IP and mobile broadband scenarios, there is increasing evidence that user experience and satisfaction follows logarithmic laws. Finally, we go even one step further and put these results into the broader context of the Weber-Fechner Law, a key principle in psychophysics describing the general relationship between the magnitude of a physical stimulus and its perceived intensity within the human sensory system.
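A minimal numerical companion to the logarithmic law discussed above: fit MOS = a + b*ln(bandwidth) to a handful of (bandwidth, MOS) pairs. The numbers are invented for illustration, not measurement data from the paper.

```python
import numpy as np

bw  = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # Mbit/s (made-up operating points)
mos = np.array([1.8, 2.5, 3.1, 3.7, 4.2, 4.6])    # mean opinion scores (made-up)

b, a = np.polyfit(np.log(bw), mos, 1)             # least-squares fit of MOS against ln(bandwidth)
print(f"MOS ~ {a:.2f} + {b:.2f} * ln(bandwidth)")
```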
Utilizing buffered YouTube playtime for QoE-oriented scheduling in OFDMA networks With the introduction of 4th generation mobile networks, applications such as high-quality video streaming to the end user becomes possible. However, the expected demand for such services outpaces the capacity increase of the networks. Since there is mostly a capacity bottleneck in the air interface between a base station and user equipment, one of the main challenges for radio resource management is therefore to enforce precise quality guarantees for users with high expectations on service quality. We consider, in this paper, an OFDMA access network with YouTube users, and address the challenge of improving the quality of experience (QoE) of a dedicated user by utilizing the buffered playtime of a YouTube video for scheduling. The advantage of this approach is that scheduling is done according to the instantaneous throughput requirement of the end user application, and not by the network by maintaining average quality-of-service (QoS) parameters. The paper describes the concept and provides a simulative evaluation of the approach in an LTE network to demonstrate the benefits.
Standardization activities in the ITU for a QoE assessment of IPTV This article gives an overview of the state of the art of objective quality assessment of audio and visual media and its standardization activities in the ITU. IPTV services are becoming one of the most promising applications over next generation networks. To provide end users with comfortable, stable, and economical services, QoE assessment methodologies for quality design and management are indispensable.
On multi-granular fuzzy linguistic modeling in group decision making problems: A systematic review and future trends. The multi-granular fuzzy linguistic modeling allows the use of several linguistic term sets in fuzzy linguistic modeling. This is quite useful when the problem involves several people with different knowledge levels since they could describe each item with different precision and they could need more than one linguistic term set. Multi-granular fuzzy linguistic modeling has been frequently used in group decision making field due to its capability of allowing each expert to express his/her preferences using his/her own linguistic term set. The aim of this research is to provide insights about the evolution of multi-granular fuzzy linguistic modeling approaches during the last years and discuss their drawbacks and advantages. A systematic literature review is proposed to achieve this goal. Additionally, some possible approaches that could improve the current multi-granular linguistic methodologies are presented.
A New Parallel Kernel-Independent Fast Multipole Method We present a new adaptive fast multipole algorithm and its parallel implementation. The algorithm is kernel-independent in the sense that the evaluation of pairwise interactions does not rely on any analytic expansions, but only utilizes kernel evaluations. The new method provides the enabling technology for many important problems in computational science and engineering. Examples include viscous flows, fracture mechanics and screened Coulombic interactions. Our MPI-based parallel implementation logically separates the computation and communication phases to avoid synchronization in the upward and downward computation passes, and thus allows us to fully exploit computation and communication overlapping. We measure isogranular and fixed-size scalability for a variety of kernels on the Pittsburgh Supercomputing Center's TCS-1 Alphaserver on up to 3000 processors. We have solved viscous flow problems with up to 2.1 billion unknowns and we have achieved 1.6 Tflops/s peak performance and 1.13 Tflops/s sustained performance.
A Note on Fuzzy Sets
Fuzzy independence and extended conditional probability In many applications, the use of Bayesian probability theory is problematical. Information needed for feasible calculation is unavailable. There are different methodologies for dealing with this problem, e.g., maximal entropy and Dempster-Shafer Theory. If one can make independence assumptions, many of the problems disappear, and in fact, this is often the method of choice even when it is obviously incorrect. The notion of independence is a 0-1 concept, which implies that human guesses about its validity will not lead to robust systems. In this paper, we propose a fuzzy formulation of this concept. It should lend itself to probabilistic updating formulas by allowing heuristic estimation of the "degree of independence." We show how this can be applied to compute a new notion of conditional probability (we call this "extended conditional probability"). Given information, one typically has the choice of full conditioning (standard dependence) or ignoring the information (standard independence). We list some desiderata for the extension of this to allowing a degree of conditioning. We then show how our formulation of degree of independence leads to a formula fulfilling these desiderata. After describing this formula, we show how this compares with other possible formulations of parameterized independence. In particular, we compare it to a linear interpolant, a higher power of a linear interpolant, and to a notion originally presented by Hummel and Manevitz [Tenth Int. Joint Conf. on Artificial Intelligence, 1987]. Interestingly, it turns out that a transformation of the Hummel-Manevitz method and our "fuzzy" method are close approximations of each other. Two examples illustrate how fuzzy independence and extended conditional probability might be applied. The first shows how linguistic probabilities result from treating fuzzy independence as a linguistic variable. The second is an industrial example of troubleshooting on the shop floor.
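The abstract mentions a simple linear interpolant between full conditioning and full independence as one of the formulations the paper compares against; that baseline is easy to state in code. The paper's own fuzzy-independence formula is not reproduced here.

```python
def interpolated_conditional(p_a, p_a_given_b, independence):
    """Linear interpolation between full conditioning (independence=0 -> P(A|B))
    and full independence (independence=1 -> P(A)).
    This is the baseline interpolant mentioned in the abstract, not the paper's formula."""
    d = float(independence)
    return d * p_a + (1.0 - d) * p_a_given_b

print(interpolated_conditional(0.3, 0.8, 0.25))   # 0.675: mostly conditioned, slightly independent
```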
Pre-ATPG path selection for near optimal post-ATPG process space coverage Path delay testing is becoming increasingly important for high-performance chip testing in the presence of process variation. To guarantee full process space coverage, the ensemble of critical paths of all chips irrespective of their manufacturing process conditions needs to be tested, as different chips may have different critical paths. Existing coverage-based path selection techniques, however, suffer from the loss of coverage after ATPG (automatic test pattern generation), i.e., although the pre-ATPG path selection achieves good coverage, after ATPG, the coverage can be severely reduced as many paths turn out to be unsensitizable. This paper presents a novel path selection algorithm that, without running ATPG, selects a set of n paths to achieve near optimal post-ATPG coverage. Details of the algorithm and its optimality conditions are discussed. Experimental results show that, compared to the state-of-the-art, the proposed algorithm achieves not only superior post-ATPG coverage, but also significant runtime speedup.
1.05051
0.05204
0.05204
0.05
0.02602
0.01301
0.002278
0.000331
0.000028
0
0
0
0
0
MIMO Radar Using Compressive Sampling A multiple-input multiple-output (MIMO) radar system is proposed for obtaining angle and Doppler information on potential targets. Transmitters and receivers are nodes of a small scale wireless network and are assumed to be randomly scattered on a disk. The transmit nodes transmit uncorrelated waveforms. Each receive node applies compressive sampling to the received signal to obtain a small number of samples, which the node subsequently forwards to a fusion center. Assuming that the targets are sparsely located in the angle-Doppler space, based on the samples forwarded by the receive nodes the fusion center formulates an l1-optimization problem, the solution of which yields target angle and Doppler information. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than required by other approaches. This implies power savings during the communication phase between the receive nodes and the fusion center. Performance in the presence of a jammer is analyzed for the case of slowly moving targets. Issues related to forming the basis matrix that spans the angle-Doppler space, and to selecting a grid for that space, are discussed. Extensive simulation results are provided to demonstrate the performance of the proposed approach at different jammer and noise levels.
Reduced complexity angle-Doppler-range estimation for MIMO radar that employs compressive sensing The authors recently proposed a MIMO radar system that is implemented by a small wireless network. By applying compressive sensing (CS) at the receive nodes, the MIMO radar super-resolution can be achieved with far fewer observations than conventional approaches. This previous work considered the estimation of direction of arrival and Doppler. Since the targets are sparse in the angle-velocity space, target information can be extracted by solving an l1 minimization problem. In this paper, the range information is exploited by introducing step frequency to MIMO radar with CS. The proposed approach is able to achieve high range resolution and also improve the ambiguous velocity. However, joint angle-Doppler-range estimation requires discretization of the angle-Doppler-range space which causes a sharp rise in the computational burden of the l1 minimization problem. To maintain an acceptable complexity, a technique is proposed to successively estimate angle, Doppler and range in a decoupled fashion. The proposed approach can significantly reduce the complexity without sacrificing performance.
Performance analysis for sparse support recovery The performance of estimating the common support for jointly sparse signals based on their projections onto lower-dimensional space is analyzed. Support recovery is formulated as a multiple-hypothesis testing problem. Both upper and lower bounds on the probability of error are derived for general measurement matrices, by using the Chernoff bound and Fano's inequality, respectively. The upper bound shows that the performance is determined by a quantity measuring the measurement matrix incoherence, while the lower bound reveals the importance of the total measurement gain. The lower bound is applied to derive the minimal number of samples needed for accurate direction-of-arrival (DOA) estimation for a sparse representation based algorithm. When applied to Gaussian measurement ensembles, these bounds give necessary and sufficient conditions for a vanishing probability of error for majority realizations of the measurement matrix. Our results offer surprising insights into sparse signal recovery. For example, as far as support recovery is concerned, the well-known bound in Compressive Sensing with the Gaussian measurement matrix is generally not sufficient unless the noise level is low. Our study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in Compressive Sensing and other application areas exploiting signal sparsity.
Compressive MUSIC: A Missing Link Between Compressive Sensing and Array Signal Processing The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support. Even though MMV problems had been traditionally addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery only in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that unveils a missing link between CS and array signal processing. The new algorithm, which we call compressive MUSIC, identifies part of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and can approach the optimal l0-bound with a finite number of snapshots.
High-Resolution Radar via Compressed Sensing A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N^2), then we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.
Nonnegative sparse coding for discriminative semi-supervised learning An informative and discriminative graph plays an important role in graph-based semi-supervised learning methods. This paper introduces a nonnegative sparse algorithm, and an approximated version of it based on the l0-l1 equivalence theory, to compute the nonnegative sparse weights of a graph; the resulting representation is termed the sparse probability graph (SPG). The nonnegative sparse weights in the graph naturally serve as clustering indicators, which benefits semi-supervised learning. More importantly, the approximation algorithm speeds up the computation of the nonnegative sparse coding, which has been a bottleneck in previous attempts at sparse nonnegative graph learning, and it is much more efficient than l1-norm sparsity techniques for learning large-scale sparse graphs. Finally, for discriminative semi-supervised learning, an adaptive label propagation algorithm is also proposed to iteratively predict the labels of data on the SPG. Promising experimental results show that the nonnegative sparse coding is efficient and effective for discriminative semi-supervised learning.
Sparse Sampling of Signal Innovations
Compressed sensing of analog signals in shift-invariant spaces A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active, however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
Block-sparse signals: uncertainty relations and efficient recovery We consider efficient methods for the recovery of block-sparse signals--i.e., sparse signals that have nonzero entries occurring in clusters--from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed l2/l1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
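A minimal sketch of the block-greedy recovery idea discussed above: a block version of orthogonal matching pursuit that selects one block of columns per iteration and re-solves a least-squares problem over the active blocks. The dictionary, block size, and signal below are synthetic illustrations, not the paper's setup, and no block-coherence condition is checked.

```python
# Minimal block-OMP sketch for block-sparse recovery (equal-sized blocks assumed).
import numpy as np

def block_omp(D, y, block_size, n_blocks_to_pick):
    m, n = D.shape
    blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
    residual, chosen = y.copy(), []
    for _ in range(n_blocks_to_pick):
        # pick the block whose columns correlate most strongly with the residual
        scores = [np.linalg.norm(D[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best not in chosen:
            chosen.append(best)
        support = np.concatenate([blocks[b] for b in chosen])
        x_ls, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # LS fit on active blocks
        residual = y - D[:, support] @ x_ls
    x = np.zeros(n)
    x[support] = x_ls
    return x

rng = np.random.default_rng(1)
m, n, bs = 40, 120, 4
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)                      # unit-norm columns
x_true = np.zeros(n); x_true[8:12] = 1.0; x_true[60:64] = -2.0   # two active blocks
y = D @ x_true
print(np.round(block_omp(D, y, bs, 2)[56:66], 2))   # recovers the second block
```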
A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory
Multidimensional scaling of fuzzy dissimilarity data Multidimensional scaling is a well-known technique for representing measurements of dissimilarity among objects as distances between points in a p-dimensional space. In this paper, this method is extended to the case where dissimilarities are expressed as intervals or fuzzy numbers. Each object is then no longer represented by a point but by a crisp or a fuzzy region. To determine these regions, two algorithms are proposed and illustrated using typical datasets. Experiments demonstrate the ability of the methods to represent both the structure and the vagueness of dissimilarity measurements.
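For orientation, here is a sketch of classical (crisp) multidimensional scaling, the point-based baseline that the paper extends to interval- and fuzzy-valued dissimilarities; the fuzzy/interval extension itself is not reproduced, and the toy dissimilarity matrix is made up.

```python
# Minimal classical MDS sketch: embed objects from a crisp dissimilarity matrix.
import numpy as np

def classical_mds(Delta, p=2):
    """Return p-dimensional coordinates whose distances approximate Delta."""
    n = Delta.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (Delta ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:p]                # keep the p largest
    return V[:, idx] @ np.diag(np.sqrt(np.maximum(w[idx], 0.0)))

# toy example: 4 points on a line; distances are recovered up to rotation/translation
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
Delta = np.abs(pts - pts.T)
X = classical_mds(Delta, p=1)
print(np.round(np.abs(X - X[0]).ravel(), 3))     # relative distances ~ 0, 1, 3, 6
```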
Future Multimedia Networking, Second International Workshop, FMN 2009, Coimbra, Portugal, June 22-23, 2009. Proceedings
Spectral Methods for Parameterized Matrix Equations. We apply polynomial approximation methods-known in the numerical PDEs context as spectral methods-to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
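A rough sketch of the interpolatory pseudospectral idea described above: solve the parameterized system A(s) x(s) = b at a few Chebyshev nodes and fit a polynomial to each solution component. The 2x2 system below is a made-up example with a fixed right-hand side, and the residual-minimizing Galerkin variant and error estimates from the paper are not attempted.

```python
# Minimal pseudospectral interpolation sketch for a parameterized linear system.
import numpy as np
from numpy.polynomial import chebyshev as C

def A(s):                                        # hypothetical parameter-dependent matrix
    return np.array([[2.0 + s, 0.5], [0.5, 3.0 - s]])

b = np.array([1.0, 1.0])
deg = 8
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))    # Chebyshev points in (-1, 1)
samples = np.array([np.linalg.solve(A(s), b) for s in nodes])     # deterministic solves
coeffs = [C.chebfit(nodes, samples[:, i], deg) for i in range(2)] # fit each component

s_test = 0.37
x_interp = np.array([C.chebval(s_test, c) for c in coeffs])
print(np.max(np.abs(x_interp - np.linalg.solve(A(s_test), b))))   # interpolation error
```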
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.04248
0.04
0.01776
0.013333
0.002714
0.0012
0.0006
0.000118
0
0
0
0
0
0
Interactive presentation: Statistical dual-Vdd assignment for FPGA interconnect power reduction Field programmable dual-Vdd interconnects are effective to reduce FPGA power. However, the deterministic Vdd assignment leverages timing slack exhaustively and significantly increases the number of near-critical paths, which results in a degraded timing yield with process variation. In this paper, we present two statistical Vdd assignment algorithms. The first greedy algorithm is based on sensitivity while the second one is based on timing slack budgeting. Both minimize chip-level interconnect power without degrading timing yield. Evaluated with MCNC circuits, the statistical algorithms reduce interconnect power by 40% compared to the single-Vdd FPGA with power gating. In contrast, the deterministic algorithm reduces interconnect power by 51% but degrades timing yield from 97.7% to 87.5%.
Non-Gaussian statistical timing analysis using second-order polynomial fitting In the nanometer manufacturing region, process variation causes significant uncertainty for circuit performance verification. Statistical static timing analysis (SSTA) is thus developed to estimate timing distribution under process variation. However, most of the existing SSTA techniques have difficulty in handling the non-Gaussian variation distribution and non-linear dependency of delay on variation sources. To solve such a problem, in this paper, we first propose a new method to approximate the max operation of two non-Gaussian random variables through second-order polynomial fitting. We then present new non-Gaussian SSTA algorithms under two types of variational delay models: quadratic model and semi-quadratic model (i.e., quadratic model without crossing terms). All atomic operations (such as max and sum) of our algorithms are performed by closed-form formulas, hence they scale well for large designs. Experimental results show that compared to the Monte-Carlo simulation, our approach predicts the mean, standard deviation, and skewness within 1%, 1%, and 5% error, respectively. Our approach is more accurate and also 20x faster than the most recent method for non-Gaussian and nonlinear SSTA.
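The statistical max is the operation that makes SSTA hard. As a point of reference (and not the paper's second-order non-Gaussian fit), the sketch below implements Clark's classic Gaussian moment-matching approximation of max(X, Y) for two correlated Gaussian arrival times; the numerical values in the example are arbitrary.

```python
# Clark (1961) moment matching for max of two correlated Gaussian arrival times.
import math

def clark_max(mu1, s1, mu2, s2, rho=0.0):
    """Approximate mean and std of max(X, Y), X~N(mu1, s1^2), Y~N(mu2, s2^2), corr rho."""
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))            # standard normal cdf
    a = math.sqrt(s1 * s1 + s2 * s2 - 2 * rho * s1 * s2)              # assumes a > 0
    alpha = (mu1 - mu2) / a
    m1 = mu1 * Phi(alpha) + mu2 * Phi(-alpha) + a * phi(alpha)        # E[max]
    m2 = ((mu1**2 + s1**2) * Phi(alpha) + (mu2**2 + s2**2) * Phi(-alpha)
          + (mu1 + mu2) * a * phi(alpha))                             # E[max^2]
    return m1, math.sqrt(max(m2 - m1 * m1, 0.0))

print(clark_max(10.0, 1.0, 9.5, 2.0, rho=0.3))   # mean/std of the later arrival (toy numbers)
```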
Fast variational interconnect delay and slew computation using quadratic models Interconnects constitute a dominant source of circuit delay for modern chip designs. The variations of critical dimensions in modern VLSI technologies lead to variability in interconnect performance that must be fully accounted for in timing verification. However, handling a multitude of inter-die/intra-die variations and assessing their impacts on circuit performance can dramatically complicate the timing analysis. In this paper, a practical interconnect delay and slew analysis technique is presented to facilitate efficient evaluation of wire performance variability. By harnessing a collection of computationally efficient procedures and closed-form formulas, process variations are directly mapped into the variability of the output delay and slew. An efficient method based on sensitivity analysis is implemented to calculate driving point models under variations for gate-level timing analysis. The proposed adjoint technique not only provides statistical performance variations of the interconnect network under analysis, but also produces delay and slew expressions parameterized in the underlying process variations in a quadratic parametric form. As such, it can be harnessed to enable statistical timing analysis while considering important statistical correlations. Our experimental results have indicated that the presented analysis is accurate regardless of location of sink nodes and it is also robust over a wide range of process variations.
An accurate sparse matrix based framework for statistical static timing analysis Statistical Static Timing Analysis has received wide attention recently and emerged as a viable technique for manufacturability analysis. To be useful, however, it is important that the error introduced in SSTA be significantly smaller than the manufacturing variations being modeled. Achieving such accuracy requires careful attention to the delay models and to the algorithms applied. In this paper, we propose a new sparse-matrix based framework for accurate path-based SSTA, motivated by the observation that the number of timing paths in practice is sub-quadratic based on a study of industrial circuits and the ISCAS89 benchmarks. Our sparse-matrix based formulation has the following advantages: (a) It places no restrictions on process parameter distributions; (b) It embeds accurate polynomial-based delay model which takes into account slope propagation naturally; (c) It takes advantage of the matrix sparsity and high performance linear algebra for efficient implementation. Our experimental results are very promising.
Statistical timing analysis with correlated non-Gaussian parameters using independent component analysis We propose a scalable and efficient parameterized block-based statistical static timing analysis algorithm incorporating both Gaussian and non-Gaussian parameter distributions, capturing spatial correlations using a grid-based model. As a preprocessing step, we employ independent component analysis to transform the set of correlated non-Gaussian parameters to a basis set of parameters that are statistically independent, and principal components analysis to orthogonalize the Gaussian parameters. The procedure requires minimal input information: given the moments of the variational parameters, we use a Pade approximation-based moment matching scheme to generate the distributions of the random variables representing the signal arrival times, and preserve correlation information by propagating arrival times in a canonical form. For the ISCAS89 benchmark circuits, as compared to Monte Carlo simulations, we obtain average errors of 0.99% and 2.05%, respectively, in the mean and standard deviation of the circuit delay. For a circuit with |G| gates and a layout with g spatial correlation grids, the complexity of our approach is O(g|G|)
VGTA: Variation Aware Gate Timing Analysis As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to gate and wire variability. Therefore, statistical timing analysis is inevitable. Most timing tools divide the analysis into two parts: 1) interconnect (wire) timing analysis and 2) gate timing analysis. Variational interconnect delay calculation for block-based TA has been recently studied. However, variational gate delay calculation has remained unexplored. In this paper, we propose a new framework to handle the variation-aware gate timing analysis in block-based TA. First, we present an approach to approximate variational RC-load by using a canonical first-order model. Next, an efficient variation-aware effective capacitance calculation based on statistical input transition, statistical gate timing library, and statistical RC-load is presented. In this step, we use a single-iteration Ceff calculation which is efficient and reasonably accurate. Finally we calculate the statistical gate delay and output slew based on the aforementioned model. Experimental results show an average error of 7% for gate delay and output slew with respect to the HSPICE Monte Carlo simulation while the runtime is about 145 times faster.
Correlation-aware statistical timing analysis with non-Gaussian delay distributions Process variations have a growing impact on circuit performance for today's integrated circuit (IC) technologies. The non-Gaussian delay distributions as well as the correlations among delays make statistical timing analysis more challenging than ever. In this paper, the authors presented an efficient block-based statistical timing analysis approach with linear complexity with respect to the circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models. This approach accounts for all correlations, from manufacturing process dependence, to re-convergent circuit paths to produce more accurate statistical timing predictions. With this approach, circuit designers can have increased confidence in the variation estimates, at a low additional computation cost.
Fast statistical timing analysis by probabilistic event propagation We propose a new statistical timing analysis algorithm, which produces arrival-time random variables for all internal signals and primary outputs for cell-based designs with all cell delays modeled as random variables. Our algorithm propagates probabilistic timing events through the circuit and obtains final probabilistic events (distributions) at all nodes. The new algorithm is deterministic and flexible in controlling run time and accuracy. However, the algorithm has exponential time complexity for circuits with reconvergent fanouts. In order to solve this problem, we further propose a fast approximate algorithm. Experiments show that this approximate algorithm speeds up the statistical timing analysis by at least an order of magnitude and produces results with small errors when compared with Monte Carlo methods.
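A toy sketch of the probabilistic event propagation idea: arrival-time and gate-delay distributions are kept as discrete PMFs on a unit time grid, sums become convolutions, and the max of independent arrivals comes from multiplying CDFs. The distributions below are invented, and the reconvergent-fanout correlation problem that the paper addresses with its approximate algorithm is ignored here.

```python
# Discrete propagation of delay distributions: sum = convolution, max = CDF product.
import numpy as np

def add_delays(pmf_a, pmf_b):
    return np.convolve(pmf_a, pmf_b)               # PMF of the sum of independent delays

def max_arrivals(pmf_a, pmf_b):
    n = max(len(pmf_a), len(pmf_b))
    a = np.pad(pmf_a, (0, n - len(pmf_a)))
    b = np.pad(pmf_b, (0, n - len(pmf_b)))
    cdf = np.cumsum(a) * np.cumsum(b)               # CDF of max of independent arrivals
    return np.diff(np.concatenate(([0.0], cdf)))    # convert back to a PMF

arr1 = np.array([0.2, 0.5, 0.3])                    # arrival-time PMF at input 1 (made up)
arr2 = np.array([0.1, 0.6, 0.3])                    # arrival-time PMF at input 2 (made up)
gate = np.array([0.5, 0.5])                         # gate-delay PMF (made up)
out = add_delays(max_arrivals(arr1, arr2), gate)    # output arrival distribution
print(np.round(out, 4), "sums to", out.sum())
```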
The algebra of fuzzy truth values The purpose of this paper is to give a straightforward mathematical treatment of algebras of fuzzy truth values for type-2 fuzzy sets.
Data compression and harmonic analysis In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the information theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the “sampling theorem”, harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future
Fuzzy-Spatial SQL Current Geographic Information Systems (GISs) are inadequate for performing spatial analysis, since they force users to formulate their often vague requests by means of crisp selection conditions on spatial data. In fact, SQL extended to support spatial analysis is becoming the de facto standard for GISs; however, it does not allow the formulation of flexible queries. Based on these considerations, we propose the extension of SQL/Spatial in order to make it flexible. Flexibility is obtained by allowing the expression of linguistic predicates defining soft spatial and non-spatial selection conditions admitting degrees of satisfaction. Specifically, this paper proposes an extension of the basic SQL SELECT operator; proposes the definition of some spatial functions to compute gradual topological, distance, and directional properties of spatial objects; introduces a new operator for defining linguistic predicates over spatial properties, and reports the related formal semantics.
Maximum Entropy Multivariate Analysis of Uncertain Dynamical Systems Based on the Wiener-Askey Polynomial Chaos Many measurement models are formalized in terms of a stochastic ordinary differential equation that relates its solution to some given observables. The expression of the measurement uncertainty for the solution that is evaluated at some time instants requires the determination of its (joint) probability density function. Recently, the polynomial chaos theory (PCT) has been widely recognized as a p...
The relationship between similarity measure and entropy of intuitionistic fuzzy sets In this paper, we introduce an axiomatic definition of the similarity measure of intuitionistic fuzzy sets (IFS) that differs from the definition of Li [15]. The relationship between the similarity measure and the entropy of IFS is investigated in detail. Six theorems on how the similarity measure could be transformed into the entropy for IFS and vice versa are proposed based on their axiomatic definitions. Some formulas have been proposed to calculate the similarity measure and the entropy of IFS. Finally, sufficient conditions to transform the similarity measures to the entropy for IFS and vice versa are given.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.222
0.025111
0.022
0.0148
0.005442
0.001
0.000368
0.000048
0
0
0
0
0
0
An Anisotropic Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data This work proposes and analyzes an anisotropic sparse grid stochastic collocation method for solving partial differential equations with random coefficients and forcing terms (input data of the model). The method consists of a Galerkin approximation in the space variables and a collocation, in probability space, on sparse tensor product grids utilizing either Clenshaw-Curtis or Gaussian knots. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. This work includes a priori and a posteriori procedures to adapt the anisotropy of the sparse grids to each given problem. These procedures seem to be very effective for the problems under study. The proposed method combines the advantages of isotropic sparse collocation with those of anisotropic full tensor product collocation: the first approach is effective for problems depending on random variables which weigh approximately equally in the solution, while the benefits of the latter approach become apparent when solving highly anisotropic problems depending on a relatively small number of random variables, as in the case where input random variables are Karhunen-Loève truncations of “smooth” random fields. This work also provides a rigorous convergence analysis of the fully discrete problem and demonstrates (sub)exponential convergence in the asymptotic regime and algebraic convergence in the preasymptotic regime, with respect to the total number of collocation points. It also shows that the anisotropic approximation breaks the curse of dimensionality for a wide set of problems. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo. In particular, for moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy seems to be very efficient and superior to all examined methods.
Principal manifold learning by sparse grids In this paper, we deal with the construction of lower-dimensional manifolds from high-dimensional data which is an important task in data mining, machine learning and statistics. Here, we consider principal manifolds as the minimum of a regularized, non-linear empirical quantization error functional. For the discretization we use a sparse grid method in latent parameter space. This approach avoids, to some extent, the curse of dimension of conventional grids like in the GTM approach. The arising non-linear problem is solved by a descent method which resembles the expectation maximization algorithm. We present our sparse grid principal manifold approach, discuss its properties and report on the results of numerical experiments for one-, two- and three-dimensional model problems.
Data mining with sparse grids We present a new approach to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to the other methods which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved....
Polynomial Chaos Expansion of Random Coefficients and the Solution of Stochastic Partial Differential Equations in the Tensor Train Format We apply the tensor train (TT) decomposition to construct the tensor product polynomial chaos expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, and exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an analytic TT representation. To construct its TT approximation numerically, we develop the new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to the adaptive cross approximation in the TT format but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin matrix and to compute the solution of the elliptic equation and its postprocessing, staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and the Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, the computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required.
Multiscale Stochastic Preconditioners in Non-intrusive Spectral Projection A preconditioning approach is developed that enables efficient polynomial chaos (PC) representations of uncertain dynamical systems. The approach is based on the definition of an appropriate multiscale stretching of the individual components of the dynamical system which, in particular, enables robust recovery of the unscaled transient dynamics. Efficient PC representations of the stochastic dynamics are then obtained through non-intrusive spectral projections of the stretched measures. Implementation of the present approach is illustrated through application to a chemical system with large uncertainties in the reaction rate constants. Computational experiments show that, despite the large stochastic variability of the stochastic solution, the resulting dynamics can be efficiently represented using sparse low-order PC expansions of the stochastic multiscale preconditioner and of stretched variables. The present experiences are finally used to motivate several strategies that promise to yield further advantages in spectral representations of stochastic dynamics.
A Sparse Composite Collocation Finite Element Method for Elliptic SPDEs. This work presents a stochastic collocation method for solving elliptic PDEs with random coefficients and forcing term which are assumed to depend on a finite number of random variables. The method consists of a hierarchic wavelet discretization in space and a sequence of hierarchic collocation operators in the probability domain to approximate the solution's statistics. The selection of collocation points is based on a Smolyak construction of zeros of orthogonal polynomials with respect to the probability density function of each random input variable. A sparse composition of levels of spatial refinements and stochastic collocation points is then proposed and analyzed, resulting in a substantial reduction of overall degrees of freedom. Like in the Monte Carlo approach, the algorithm results in solving a number of uncoupled, purely deterministic elliptic problems, which allows the integration of existing fast solvers for elliptic PDEs. Numerical examples on two-dimensional domains will then demonstrate the superiority of this sparse composite collocation finite element method compared to the “full composite” collocation finite element method and the Monte Carlo method.
A Weighted Reduced Basis Method for Elliptic Partial Differential Equations with Random Input Data. In this work we propose and analyze a weighted reduced basis method to solve elliptic partial differential equations (PDEs) with random input data. The PDEs are first transformed into a weighted parametric elliptic problem depending on a finite number of parameters. Distinctive importance of the solution at different values of the parameters is taken into account by assigning different weights to the samples in the greedy sampling procedure. A priori convergence analysis is carried out by constructive approximation of the exact solution with respect to the weighted parameters. Numerical examples are provided for the assessment of the advantages of the proposed method over the reduced basis method and the stochastic collocation method in both univariate and multivariate stochastic problems.
Karhunen-Loève approximation of random fields by generalized fast multipole methods KL approximation of a possibly instationary random field a(ω, x) ∈ L2(Ω, dP; L∞(D)) subject to prescribed mean field Ea(x) = ∫_Ω a(ω, x) dP(ω) and covariance Va(x, x') = ∫_Ω (a(ω, x) - Ea(x))(a(ω, x') - Ea(x')) dP(ω) in a polyhedral domain D ⊂ R^d is analyzed. We show how for stationary covariances Va(x, x') = ga(|x - x'|) with ga(z) analytic outside of z = 0, an M-term approximate KL-expansion aM(ω, x) of a(ω, x) can be computed in log-linear complexity. The approach applies in arbitrary domains D and for nonseparable covariances Ca. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree p ≥ 0 on a quasiuniform, possibly unstructured mesh of width h in D, plus a generalized fast multipole accelerated Krylov eigensolver. The approximate KL-expansion aM(ω, x) of a(ω, x) has accuracy O(exp(-b M^{1/d})) if ga is analytic at z = 0 and accuracy O(M^{-k/d}) if ga is C^k at zero. It is obtained in O(M N (log N)^b) operations, where N = O(h^{-d}).
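A small numerical sketch of what an M-term KL truncation looks like in practice: discretize an exponential covariance kernel on a 1-D grid, take the leading eigenpairs, and draw one realization. This uses a dense eigendecomposition only; the fast multipole acceleration and error estimates above are not reproduced, and the grid size, correlation length, and M are arbitrary.

```python
# Truncated Karhunen-Loeve expansion on a 1-D grid with an exponential covariance.
import numpy as np

n, M, corr_len = 200, 10, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)    # covariance matrix on the grid
w, V = np.linalg.eigh(C)                                    # eigenpairs, ascending order
w, V = w[::-1][:M], V[:, ::-1][:, :M]                       # keep the M largest modes

rng = np.random.default_rng(0)
xi = rng.standard_normal(M)                                 # independent N(0,1) weights
a_M = V @ (np.sqrt(np.maximum(w, 0.0)) * xi)                # one truncated realization
print("captured variance fraction:", w.sum() / np.trace(C))
print("realization range:", a_M.min(), a_M.max())
```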
Efficient Iterative Solvers for Stochastic Galerkin Discretizations of Log-Transformed Random Diffusion Problems We consider the numerical solution of a steady-state diffusion problem where the diffusion coefficient is the exponent of a random field. The standard stochastic Galerkin formulation of this problem is computationally demanding because of the nonlinear structure of the uncertain component of it. We consider a reformulated version of this problem as a stochastic convection-diffusion problem with random convective velocity that depends linearly on a fixed number of independent truncated Gaussian random variables. The associated Galerkin matrix is nonsymmetric but sparse and allows for fast matrix-vector multiplications with optimal complexity. We construct and analyze two block-diagonal preconditioners for this Galerkin matrix for use with Krylov subspace methods such as the generalized minimal residual method. We test the efficiency of the proposed preconditioning approaches and compare the iterative solver performance for a model problem posed in both diffusion and convection-diffusion formulations.
Inversion of Robin coefficient by a spectral stochastic finite element approach This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
Using sparse polynomial chaos expansions for the global sensitivity analysis of groundwater lifetime expectancy in a multi-layered hydrogeological model. The study makes use of polynomial chaos expansions to compute Sobol' indices within the frame of a global sensitivity analysis of hydro-dispersive parameters in a simplified vertical cross-section of a segment of the subsurface of the Paris Basin. Applying conservative ranges, the uncertainty in 78 input variables is propagated upon the mean lifetime expectancy of water molecules departing from a specific location within a highly confining layer situated in the middle of the model domain. Lifetime expectancy is a hydrogeological performance measure pertinent to safety analysis with respect to subsurface contaminants, such as radionuclides. The sensitivity analysis indicates that the variability in the mean lifetime expectancy can be sufficiently explained by the uncertainty in the petrofacies, i.e. the sets of porosity and hydraulic conductivity, of only a few layers of the model. The obtained results provide guidance regarding the uncertainty modeling in future investigations employing detailed numerical models of the subsurface of the Paris Basin. Moreover, the study demonstrates the high efficiency of sparse polynomial chaos expansions in computing Sobol' indices for high-dimensional models.
A fuzzy approach to select the location of the distribution center The location selection of distribution center (DC) is one of the most important decision issues for logistics managers. Owing to vague concept frequently represented in decision data, a new multiple criteria decision-making method is proposed to solve the distribution center location selection problem under fuzzy environment. In the proposed method, the ratings of each alternative and the weight of each criterion are described by linguistic variables which can be expressed in triangular fuzzy numbers. The final evaluation value of each DC location is also expressed in a triangular fuzzy number. By calculating the difference of final evaluation value between each pair of DC locations, a fuzzy preference relation matrix is constructed to represent the intensity of the preferences of one plant location over another. And then, a stepwise ranking procedure is proposed to determine the ranking order of all candidate locations. Finally, a numerical example is solved to illustrate the procedure of the proposed method at the end of this paper.
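To make the triangular-fuzzy-number machinery above concrete, here is a minimal sketch of aggregating linguistic ratings into a fuzzy score per site and ranking by a simple centroid defuzzification. The linguistic scale, weights, and sites are made-up illustrations; the paper's fuzzy preference relation matrix and stepwise ranking procedure are not reproduced.

```python
# Weighted aggregation of triangular fuzzy ratings with centroid defuzzification.
import numpy as np

# hypothetical linguistic scale -> triangular fuzzy number (l, m, u)
SCALE = {"poor": (0, 1, 3), "fair": (3, 5, 7), "good": (7, 9, 10)}

def weighted_rating(ratings, weights):
    """Weighted sum of triangular fuzzy numbers (crisp weights summing to 1)."""
    tfn = np.zeros(3)
    for r, w in zip(ratings, weights):
        tfn += w * np.asarray(SCALE[r], dtype=float)
    return tfn

def centroid(tfn):
    l, m, u = tfn
    return (l + m + u) / 3.0                     # simple centroid defuzzification

weights = [0.5, 0.3, 0.2]                        # criterion weights (cost, access, labor)
sites = {"A": ["good", "fair", "poor"], "B": ["fair", "good", "good"]}
scores = {s: centroid(weighted_rating(r, weights)) for s, r in sites.items()}
print(max(scores, key=scores.get), scores)       # best-ranked candidate location
```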
Mixed linear system estimation and identification We consider a mixed linear system model, with both continuous and discrete inputs and outputs, described by a coefficient matrix and a set of noise variances. When the discrete inputs and outputs are absent, the model reduces to the usual noise-corrupted linear system. With discrete inputs only, the model has been used in fault estimation, and with discrete outputs only, the system reduces to a probit model. We consider two fundamental problems: estimating the model input, given the model parameters and the model output; and identifying the model parameters, given a training set of input–output pairs. The estimation problem leads to a mixed Boolean-convex optimization problem, which can be solved exactly when the number of discrete variables is small enough. In other cases, when the number of discrete variables is large, the estimation problem can be solved approximately, by solving a convex relaxation, rounding, and possibly, carrying out a local optimization step. The identification problem is convex and so can be exactly solved. Adding ℓ1 regularization to the identification problem allows us to trade off model fit and model parsimony. We illustrate the identification and estimation methods with a numerical example.
Gauss–Legendre and Chebyshev quadratures for singular integrals Exact expressions are presented for efficient computation of the weights in Gauss–Legendre and Chebyshev quadratures for selected singular integrands. The singularities may be of Cauchy type, logarithmic type or algebraic-logarithmic end-point branching points. We provide Fortran 90 routines for computing the weights for both the Gauss–Legendre and the Chebyshev (Fejér-1) meshes whose size can be set by the user.
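For context, the sketch below shows the standard, non-singular Gauss-Legendre baseline using NumPy's built-in nodes and weights; the paper's exact weight formulas for Cauchy, logarithmic, and algebraic-logarithmic singular integrands (and its Fortran 90 routines) are not reproduced.

```python
# Standard n-point Gauss-Legendre quadrature on [a, b] (regular integrands only).
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f on [a, b] with an n-point Gauss-Legendre rule."""
    t, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)          # affine map of nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

approx = gauss_legendre(np.exp, 0.0, 1.0, 8)
print(approx, "vs exact", np.e - 1.0)
```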
1.002204
0.004109
0.002743
0.002341
0.002168
0.00197
0.001676
0.00106
0.000695
0.000164
0.000005
0
0
0
Compressive sensing of a superposition of pulses Compressive Sensing (CS) has emerged as a potentially viable technique for the efficient acquisition of high-resolution signals and images that have a sparse representation in a fixed basis. The number of linear measurements M required for robust polynomial time recovery of S-sparse signals of length N can be shown to be proportional to S log N. However, in many real-life imaging applications, the original S-sparse image may be blurred by an unknown point spread function defined over a domain Ω; this multiplies the apparent sparsity of the image, as well as the corresponding acquisition cost, by a factor of |Ω|. In this paper, we propose a new CS recovery algorithm for such images that can be modeled as a sparse superposition of pulses. Our method can be used to infer both the shape of the two-dimensional pulse and the locations and amplitudes of the pulses. Our main theoretical result shows that our reconstruction method requires merely M = O(S + |Ω|) linear measurements, so that M is sublinear in the overall image sparsity S|Ω|. Experiments with real world data demonstrate that our method provides considerable gains over standard state-of-the-art compressive sensing techniques in terms of numbers of measurements required for stable recovery.
Channel Protection: Random Coding Meets Sparse Channels Multipath interference is an ubiquitous phenomenon in modern communication systems. The conventional way to compensate for this effect is to equalize the channel by estimating its impulse response by transmitting a set of training symbols. The primary drawback to this type of approach is that it can be unreliable if the channel is changing rapidly. In this paper, we show that randomly encoding the signal can protect it against channel uncertainty when the channel is sparse. Before transmission, the signal is mapped into a slightly longer codeword using a random matrix. From the received signal, we are able to simultaneously estimate the channel and recover the transmitted signal. We discuss two schemes for the recovery. Both of them exploit the sparsity of the underlying channel. We show that if the channel impulse response is sufficiently sparse, the transmitted signal can be recovered reliably.
Compressive sampling of pulse trains: Spread the spectrum! In this paper we consider the problem of sampling far below the Nyquist rate signals that are sparse linear superpositions of shifts of a known, potentially wide-band, pulse. This signal model is key for applications such as Ultra Wide Band (UWB) communications or neural signal processing. Following the recently proposed Compressed Sensing methodology, we study several acquisition strategies and show that the approximations recovered via ℓ1 minimization are greatly enhanced if one uses Spread Spectrum modulation prior to applying random Fourier measurements. We complement our experiments with a discussion of possible hardware implementation of our technique.
Sampling Signals from a Union of Subspaces The single linear vector space assumption is widely used in modeling the signal classes, mainly due to its simplicity and mathematical tractability. In certain signals, a union of subspaces can be a more appropriate model. This paper provides a new perspective for signal sampling by considering signals from a union of subspaces instead of a single space.
Iterative Hard Thresholding for Compressed Sensing Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper): • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
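A minimal sketch of the hard-thresholded gradient iteration the paper analyzes, x ← H_S(x + Φᵀ(y − Φx)), applied to a synthetic Gaussian measurement matrix scaled so its spectral norm is below 1 (a convenient stability convention for this sketch, not a reproduction of the paper's experiments). All problem sizes are made up.

```python
# Iterative hard thresholding (IHT) sketch for y = Phi x + noise.
import numpy as np

def iht(Phi, y, S, n_iter=500):
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x + Phi.T @ (y - Phi @ x)              # gradient step
        keep = np.argsort(-np.abs(z))[:S]          # indices of the S largest magnitudes
        x = np.zeros_like(z)
        x[keep] = z[keep]                          # hard threshold H_S
    return x

rng = np.random.default_rng(2)
n, m, S = 256, 80, 6
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Phi /= np.linalg.norm(Phi, 2) * 1.01               # keep the spectral norm below 1
x_true = np.zeros(n)
x_true[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
y = Phi @ x_true + 0.005 * rng.standard_normal(m)
print("recovery error:", np.linalg.norm(iht(Phi, y, S) - x_true))
```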
Extensions of compressed sensing We study the notion of compressed sensing (CS) as put forward by Donoho, Candes, Tao and others. The notion proposes a signal or image, unknown but supposed to be compressible by a known transform, (e.g. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of data points, and yet be accurately reconstructed. The samples are nonadaptive and measure 'random' linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible l1 norm.We present initial 'proof-of-concept' examples in the favorable case where the vast majority of the transform coefficients are zero. We continue with a series of numerical experiments, for the setting of lp-sparsity, in which the object has all coefficients nonzero, but the coefficients obey an lp bound, for some p ∈ (0, 1]. The reconstruction errors obey the inequalities paralleling the theory, seemingly with well-behaved constants.We report that several workable families of 'random' linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard.We next consider how these ideas can be used to model problems in spectroscopy and image processing, and in synthetic examples see that the reconstructions from CS are often visually "noisy". To suppress this noise we postprocess using translation-invariant denoising, and find the visual appearance considerably improved.We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology.These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.
A lower estimate for entropy numbers The behaviour of the entropy numbers e_k(id: l_p^n → l_q^n), 0 < p < q ⩽ ∞, is well known (up to multiplicative constants independent of n and k), except in the quasi-Banach case 0 < p < 1 for "medium size" k, i.e., when log n ⩽ k ⩽ n, where only an upper estimate is available so far. We close this gap by proving the lower estimate e_k(id: l_p^n → l_q^n) ⩾ c (log(n/k + 1)/k)^{1/p − 1/q} for all 0 < p < q ⩽ ∞ and log n ⩽ k ⩽ n, with some constant c > 0 depending only on p.
Some Defects in Finite-Difference Edge Finders This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
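As a concrete instance of the method surveyed above, the sketch below applies the standard ADMM x/z/u updates to the lasso, min_x 0.5||Ax − b||² + λ||x||₁: a ridge-type x-update, a soft-thresholding z-update, and a dual update. The problem data, penalty parameter, and sizes are synthetic choices for illustration only.

```python
# ADMM for the lasso: x-update (ridge solve), z-update (shrinkage), dual update.
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(AtA + rho * np.eye(n))            # factor once, reuse every iteration
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))    # x-update via the Cholesky factor
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # z-update: soft thresholding
        u = u + x - z                                        # scaled dual update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0)[[3, 40, 77]], 2))   # approximately recovers the support
```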
A framework for understanding human factors in web-based electronic commerce The World Wide Web and email are used increasingly for purchasing and selling products. The use of the internet for these functions represents a significant departure from the standard range of information retrieval and communication tasks for which it has most often been used. Electronic commerce should not be assumed to be information retrieval, it is a separate task-domain, and the software systems that support it should be designed from the perspective of its goals and constraints. At present there are many different approaches to the problem of how to support seller and buyer goals using the internet. They range from standard, hierarchically arranged, hyperlink pages to “electronic sales assistants”, and from text-based pages to 3D virtual environments. In this paper, we briefly introduce the electronic commerce task from the perspective of the buyer, and then review and analyse the technologies. A framework is then proposed to describe the design dimensions of electronic commerce. We illustrate how this framework may be used to generate additional, hypothetical technologies that may be worth further exploration.
Fast image recovery using variable splitting and constrained optimization We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods.
QoE Aware Service Delivery in Distributed Environment Service delivery and customer satisfaction are strongly related items for a correct commercial management platform. Technical aspects targeting this issue relate to QoS parameters that can be handled by the platform, at least partially. Subjective psychological issues and human cognitive aspects are typically left unconsidered, yet they directly determine the Quality of Experience (QoE). These factors finally have to be considered as key input for a successful business operation between a customer and a company. In our work, a multi-disciplinary approach is taken to propose a QoE interaction model based on the theoretical results from various fields including psychology, cognitive sciences, sociology, service ecosystem and information technology. In this paper a QoE evaluator is described for assessing the service delivery in a distributed and integrated environment on a per-user and per-service basis.
Application of FMCDM model to selecting the hub location in the marine transportation: A case study in southeastern Asia Hub location selection problems have become one of the most popular and important issues not only in the truck transportation and the air transportation, but also in the marine transportation. The main focus of this paper is on container transshipment hub locations in southeastern Asia. Transshipment is the fastest growing segment of the containerport market, resulting in significant scope to develop new transshipment terminal capacity to cater for future expected traffic flows. A shipping carrier not only calculates transport distances and operation costs, but also evaluates some qualitative conditions for existing hub locations and then selects an optimal container transshipment hub location in the region. In this paper, a fuzzy multiple criteria decision-making (FMCDM) model is proposed for evaluating and selecting the container transshipment hub port. Finally, the utilization of the proposed FMCDM model is demonstrated with a case study of hub locations in southeastern Asia. The results show that the FMCDM model proposed in this paper can be used to explain the evaluation and decision-making procedures of hub location selection well. In addition, the preferences are calculated for existing hub locations and these are then compared with a new proposed container transshipment hub location in the region, in this instance the Port of Shanghai. Furthermore, a sensitivity analysis is performed.
Designing type-2 fuzzy logic system controllers via fuzzy Lyapunov synthesis for the output regulator of a servomechanism with nonlinear backlash Fuzzy Lyapunov Synthesis is extended to the design of Type-2 Fuzzy Logic System Controllers for the output regulation problem for a servomechanism with nonlinear backlash. The problem in question is to design a feedback controller so as to obtain the closed-loop system in which all trajectories are bounded and the load of the driver is regulated to a desired position while also attenuating the influence of external disturbances. The servomotor position is the only measurement available for feedback; the proposed extension is far from trivial because of nonminimum phase properties of the system. Performance issues of the Type-2 Fuzzy Logic Regulator constructed are illustrated in a simulation study.
1.24
0.24
0.24
0.053333
0.003244
0.000235
0.000005
0
0
0
0
0
0
0
Hierarchical Modeling, Optimization, and Synthesis for System-Level Analog and RF Designs The paper describes the recent state of the art in hierarchical analog synthesis, with a strong emphasis on associated techniques for computer-aided model generation and optimization. Over the past decade, analog design automation has progressed to the point where there are industrially useful and commercially available tools at the cell level-tools for analog components with 10-100 devices. Automated techniques for device sizing, for layout, and for basic statistical centering have been successfully deployed. However, successful component-level tools do not scale trivially to system-level applications. While a typical analog circuit may require only 100 devices, a typical system such as a phase-locked loop, data converter, or RF front-end might assemble a few hundred such circuits, and comprise 10 000 devices or more. And unlike purely digital systems, mixed-signal designs typically need to optimize dozens of competing continuous-valued performance specifications, which depend on the circuit designer's abilities to successfully exploit a range of nonlinear behaviors across levels of abstraction from devices to circuits to systems. For purposes of synthesis or verification, these designs are not tractable when considered "flat." These designs must be approached with hierarchical tools that deal with the system's intrinsic design hierarchy. This paper surveys recent advances in analog design tools that specifically deal with the hierarchical nature of practical analog and RF systems. We begin with a detailed survey of algorithmic techniques for automatically extracting a suitable nonlinear macromodel from a device-level circuit. Such techniques are critical to both verification and synthesis activities for complex systems. We then survey recent ideas in hierarchical synthesis for analog systems and focus in particular on numerical techniques for handling the large number of degrees of freedom in these designs and for exploring the space of performance tradeoffs early in the design process. Finally, we briefly touch on recent ideas for accommodating models of statistical manufacturing variations in these tools and flows.
Stable Reduced Models for Nonlinear Descriptor Systems Through Piecewise-Linear Approximation and Projection This paper presents theoretical and practical results concerning the stability of piecewise-linear (PWL) reduced models for the purposes of analog macromodeling. Results include proofs of input-output (I/O) stability for PWL approximations to certain classes of nonlinear descriptor systems, along with projection techniques that are guaranteed to preserve I/O stability in reduced-order PWL models. We also derive a new PWL formulation and introduce a new nonlinear projection, allowing us to extend our stability results to a broader class of nonlinear systems described by models containing nonlinear descriptor functions. Lastly, we present algorithms to compute efficiently the required stabilizing nonlinear left-projection matrix operators.
Principle hessian direction based parameter reduction for interconnect networks with process variation As CMOS technology enters the nanometer regime, the increasing process variation is having a manifest impact on circuit performance. To accurately take account of both global and local process variations, a large number of random variables (or parameters) have to be incorporated into circuit models. This in turn raises the complexity of the circuit models. The current paper proposes a Principle Hessian Direction (PHD) based parameter reduction approach for interconnect networks. The proposed approach relies on each parameter's impact on circuit performance to decide whether to keep or discard that parameter. Compared with the existing principal component analysis (PCA) method, this performance-based property yields a significantly smaller parameter set after reduction. The experimental results also support our conclusions. In interconnect cases, the proposed method reduces 70% of the parameters. In some cases (the mesh example in the current paper), the new approach leads to an 85% reduction. We also tested ISCAS benchmarks. In all cases, an average reduction of 53% is observed with less than 3% error in mean and less than 8% error in variation.
Co-Learning Bayesian Model Fusion: Efficient Performance Modeling of Analog and Mixed-Signal Circuits Using Side Information Efficient performance modeling of today's analog and mixed-signal (AMS) circuits is an important yet challenging task. In this paper, we propose a novel performance modeling algorithm that is referred to as Co-Learning Bayesian Model Fusion (CL-BMF). The key idea of CL-BMF is to take advantage of the additional information collected from simulation and/or measurement to reduce the performance modeling cost. Different from the traditional performance modeling approaches which focus on the prior information of model coefficients (i.e. the coefficient side information) only, CL-BMF takes advantage of another new form of prior knowledge: the performance side information. In particular, CL-BMF combines the coefficient side information, the performance side information and a small number of training samples through Bayesian inference based on a graphical model. Two circuit examples designed in a commercial 32nm SOI CMOS process demonstrate that CL-BMF achieves up to 5X speed-up over other state-of-the-art performance modeling techniques without surrendering any accuracy.
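The full CL-BMF graphical model is not given in the abstract above, so the sketch below illustrates only its simpler ingredient: using a prior on model coefficients (the "coefficient side information") together with a few training samples via MAP estimation. The sizes, prior, and noise level are assumptions for illustration; the performance-side-information coupling of the actual method is not reproduced.

```python
import numpy as np

# Sketch of the coefficient-side-information idea: MAP estimation of
# performance-model coefficients w from a few samples (X, y) with a Gaussian
# prior N(w0, tau^2 I) taken, e.g., from an earlier design. This is one piece
# of CL-BMF only; all numbers below are hypothetical.

rng = np.random.default_rng(0)
n_samples, n_basis = 20, 50                # few samples, many basis functions
X = rng.standard_normal((n_samples, n_basis))
w_true = np.zeros(n_basis); w_true[:5] = [1.2, -0.8, 0.5, 0.3, -0.2]
y = X @ w_true + 0.05 * rng.standard_normal(n_samples)

w0 = w_true + 0.1 * rng.standard_normal(n_basis)   # prior mean (side information)
tau2, sigma2 = 0.05, 0.05**2                        # prior and noise variances

# Posterior mean / MAP for a Gaussian prior and Gaussian likelihood.
A = X.T @ X / sigma2 + np.eye(n_basis) / tau2
b = X.T @ y / sigma2 + w0 / tau2
w_map = np.linalg.solve(A, b)

print("relative coefficient error:",
      np.linalg.norm(w_map - w_true) / np.linalg.norm(w_true))
```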
General-Purpose Nonlinear Model-Order Reduction Using Piecewise-Polynomial Representations We present algorithms for automated macromodeling of nonlinear mixed-signal system blocks. A key feature of our methods is that they automate the generation of general-purpose macromodels that are suitable for a wide range of time- and frequency-domain analyses important in mixed-signal design flows. In our approach, a nonlinear circuit or system is approximated using piecewise-polynomial (PWP) representations. Each polynomial system is reduced to a smaller one via weakly nonlinear polynomial model-reduction methods. Our approach, dubbed PWP, generalizes recent trajectory-based piecewise-linear approaches and ties them with polynomial-based model-order reduction, which inherently captures stronger nonlinearities within each region. PWP-generated macromodels not only reproduce small-signal distortion and intermodulation properties well but also retain fidelity in large-signal transient analyses. The reduced models can be used as drop-in replacements for large subsystems to achieve fast system-level simulation using a variety of time- and frequency-domain analyses (such as dc, ac, transient, harmonic balance, etc.). For the polynomial reduction step within PWP, we also present a novel technique [dubbed multiple pseudoinput (MPI)] that combines concepts from proper orthogonal decomposition with Krylov-subspace projection. We illustrate the use of PWP and MPI with several examples (including op-amps and I/O buffers) and provide important implementation details. Our experiments indicate that it is easy to obtain speedups of about an order of magnitude with push-button nonlinear macromodel-generation algorithms.
Fast, non-Monte-Carlo estimation of transient performance variation due to device mismatch This paper describes an efficient way of simulating the effects of device random mismatch on circuit transient characteristics, such as variations in delay or in frequency. The proposed method models DC random offsets as equivalent AC pseudonoises and leverages the fast, linear periodically time-varying (LPTV) noise analysis available from RF circuit simulators. Therefore, the method can be considered as an extension to DCMATCH analysis and offers a large speed-up compared to the traditional Monte Carlo analysis. Although the assumed linear perturbation model is valid only for small variations, it enables easy ways to estimate correlations among variations and identify the most sensitive design parameters to mismatch, all at no additional simulation cost. Three benchmarks measuring the variations in the input offset voltage of a clocked comparator, the delay of a logic path, and the frequency of an oscillator demonstrate the speed improvement of about 100-1000 × compared to a 1000-point Monte Carlo method.
A big-data approach to handle process variations: Uncertainty quantification by tensor recovery Stochastic spectral methods have become a popular technique to quantify the uncertainties of nano-scale devices and circuits. They are much more efficient than Monte Carlo for certain design cases with a small number of random parameters. However, their computational cost significantly increases as the number of random parameters increases. This paper presents a big-data approach to solve high-dimensional uncertainty quantification problems. Specifically, we simulate integrated circuits and MEMS at only a small number of quadrature samples; then, a huge number of (e.g., 1.5×10^27) solution samples are estimated from the available small-size (e.g., 500) solution samples via a low-rank and tensor-recovery method. Numerical results show that our algorithm can easily extend the applicability of tensor-product stochastic collocation to IC and MEMS problems with over 50 random parameters, whereas the traditional algorithm can only handle several random parameters.
Statistical modeling with the virtual source MOSFET model A statistical extension of the ultra-compact Virtual Source (VS) MOSFET model is developed here for the first time. The characterization uses a statistical extraction technique based on the backward propagation of variance (BPV) with variability parameters derived directly from the nominal VS model. The resulting statistical VS model is extensively validated using Monte Carlo simulations, and the statistical distributions of several figures of merit for logic and memory cells are compared with those of a BSIM model from a 40-nm CMOS industrial design kit. The comparisons show almost identical distributions with distinct run time advantages for the statistical VS model. Additional simulations show that the statistical VS model accurately captures non-Gaussian features that are important for low-power designs.
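The abstract above references backward propagation of variance (BPV) without detailing it, so here is a simplified, hedged sketch of the core idea: under a linearized, independent-parameter assumption, measured variances of figures of merit are a weighted sum of the unknown parameter variances through squared sensitivities, which can then be solved for. The sensitivity values and variances below are made up.

```python
import numpy as np
from scipy.optimize import nnls

# Simplified BPV sketch: assuming independent variability parameters and
# linearized propagation, var(F_i) ~= sum_j S_ij^2 * var(p_j), where
# S_ij = dF_i/dp_j at the nominal point. Given "measured" FOM variances and
# simulated sensitivities, solve for parameter variances with nonnegative
# least squares. All numbers are hypothetical.

S = np.array([[ 2.0e-3, 5.0e-2, 1.0e-1],     # e.g., sensitivities of FOM 1
              [-1.5e-3, 8.0e-2, 2.0e-2],     # e.g., sensitivities of FOM 2
              [ 4.0e-3, 1.0e-2, 6.0e-2]])    # e.g., sensitivities of FOM 3

var_params_true = np.array([0.04, 0.01, 0.02])
var_fom = (S**2) @ var_params_true            # stand-in for measured variances

var_params_est, _ = nnls(S**2, var_fom)
print("extracted parameter sigmas:", np.sqrt(var_params_est))
```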
Statistical Timing Analysis: From Basic Principles to State of the Art Static-timing analysis (STA) has been one of the most pervasive and successful analysis engines in the design of digital circuits for the last 20 years. However, in recent years, the increased loss of predictability in semiconductor devices has raised concern over the ability of STA to effectively model statistical variations. This has resulted in extensive research in the so-called statistical STA (SSTA), which marks a significant departure from the traditional STA framework. In this paper, we review the recent developments in SSTA. We first discuss its underlying models and assumptions, then survey the major approaches, and close by discussing its remaining key challenges.
Efficient methods for simulating highly nonlinear multi-rate circuits Widely-separated time scales appear in many electronic circuits, making traditional analysis difficult or impossible if the circuits are highly nonlinear. In this paper, an analytical formulation and numerical methods are presented for treating strongly nonlinear multi-rate circuits effectively. Multivariate functions in the time domain are used to capture widely separated rates efficiently, and a special partial differential equation (the MPDE) is shown to relate the multivariate forms of a circuit's signals. Time-domain and mixed frequency-time simulation algorithms are presented for solving the MPDE. The new methods can analyze circuits that are both large and strongly nonlinear. Compared to traditional techniques, speedups of more than two orders of magnitude, as well as improved accuracy, are obtained.
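For reference, the two-rate form of the MPDE mentioned above can be written as follows for a circuit governed by d q(x(t))/dt + f(x(t)) = b(t); this is the standard bivariate formulation, included here as context rather than taken from the abstract itself.

```latex
% Two-rate MPDE for a circuit dq(x(t))/dt + f(x(t)) = b(t), with widely
% separated time scales t_1 and t_2:
\frac{\partial q\bigl(\hat{x}(t_1,t_2)\bigr)}{\partial t_1}
  + \frac{\partial q\bigl(\hat{x}(t_1,t_2)\bigr)}{\partial t_2}
  + f\bigl(\hat{x}(t_1,t_2)\bigr) = \hat{b}(t_1,t_2),
\qquad
x(t) = \hat{x}(t,t) \ \text{whenever}\ b(t) = \hat{b}(t,t).
```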
On the Smolyak Cubature Error for Analytic Functions this paper, the author has been informed that Gerstner and Griebel [4] rediscovered this method. For algorithmic details, we refer to their paper. The resulting Smolyak cubature formulae are denoted by Q
Data Separation by Sparse Representations Recently, sparsity has become a key concept in various areas of applied mathematics, computer science, and electrical engineering. One application of this novel methodology is the separation of data, which is composed of two (or more) morphologically distinct constituents. The key idea is to carefully select representation systems each providing sparse approximations of one of the components. Then the sparsest coefficient vector representing the data within the composed - and therefore highly redundant - representation system is computed by $\ell_1$ minimization or thresholding. This automatically enforces separation. This paper shall serve as an introduction to and a survey about this exciting area of research as well as a reference for the state-of-the-art of this research field. It will appear as a chapter in a book on "Compressed Sensing: Theory and Applications" edited by Yonina Eldar and Gitta Kutyniok.
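As a concrete illustration of the separation-by-sparsity idea described above, the sketch below mixes a spike-sparse component with a DCT-sparse component and separates them by l1 minimization over the concatenated dictionary, solved with plain ISTA. The dictionary choice, regularization weight, and iteration count are illustrative assumptions, not the survey's prescriptions.

```python
import numpy as np

# Morphological-component-separation sketch: a signal made of a few spikes plus
# a few DCT atoms is separated by l1 minimization over [I, DCT^T] via ISTA.

rng = np.random.default_rng(1)
N = 256
k = np.arange(N)[:, None]; n = np.arange(N)[None, :]
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)                      # orthonormal DCT-II analysis matrix

spikes = np.zeros(N); spikes[rng.choice(N, 5, replace=False)] = rng.normal(0, 3, 5)
coeffs = np.zeros(N); coeffs[rng.choice(N, 5, replace=False)] = rng.normal(0, 3, 5)
y = spikes + C.T @ coeffs                    # observed mixture

A = np.hstack([np.eye(N), C.T])              # concatenated dictionary, N x 2N
lam, step = 0.05, 0.5                        # step = 1/||A||^2 since A A^T = 2I
c = np.zeros(2 * N)
for _ in range(500):                         # ISTA: gradient step + soft threshold
    g = c - step * (A.T @ (A @ c - y))
    c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

spikes_hat, coeffs_hat = c[:N], c[N:]
print("spike-part relative error:",
      np.linalg.norm(spikes_hat - spikes) / np.linalg.norm(spikes))
```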
Incremental sparse saliency detection Guided by attention, the human visual system is able to locate objects of interest in complex scenes. We propose a new visual saliency detection model for both image and video. Inspired by biological vision, saliency is defined locally. Lossy compression is adopted, where the saliency of a location is measured by the Incremental Coding Length (ICL). The ICL is computed by presenting the center patch as the sparsest linear representation of its surroundings. The final saliency map is generated by accumulating the coding length. The model is tested on both images and videos. The results indicate that our method produces reliable and robust saliency.
Mesh denoising via L0 minimization We present an algorithm for denoising triangulated models based on L0 minimization. Our method maximizes the flat regions of the model and gradually removes noise while preserving sharp features. As part of this process, we build a discrete differential operator for arbitrary triangle meshes that is robust with respect to degenerate triangulations. We compare our method versus other anisotropic denoising algorithms and demonstrate that our method is more robust and produces good results even in the presence of high noise.
1.041036
0.02027
0.0147
0.013333
0.010439
0.00405
0.00054
0.000243
0.000051
0.000002
0
0
0
0
Co-Learning Bayesian Model Fusion: Efficient Performance Modeling of Analog and Mixed-Signal Circuits Using Side Information Efficient performance modeling of today's analog and mixed-signal (AMS) circuits is an important yet challenging task. In this paper, we propose a novel performance modeling algorithm that is referred to as Co-Learning Bayesian Model Fusion (CL-BMF). The key idea of CL-BMF is to take advantage of the additional information collected from simulation and/or measurement to reduce the performance modeling cost. Different from the traditional performance modeling approaches which focus on the prior information of model coefficients (i.e. the coefficient side information) only, CL-BMF takes advantage of another new form of prior knowledge: the performance side information. In particular, CL-BMF combines the coefficient side information, the performance side information and a small number of training samples through Bayesian inference based on a graphical model. Two circuit examples designed in a commercial 32nm SOI CMOS process demonstrate that CL-BMF achieves up to 5X speed-up over other state-of-the-art performance modeling techniques without surrendering any accuracy.
Efficient trimmed-sample Monte Carlo methodology and yield-aware design flow for analog circuits This paper proposes an efficient trimmed-sample Monte Carlo (TSMC) methodology and a novel yield-aware design flow for analog circuits. This approach focuses on "trimming simulation samples" to speed up MC analysis. The best possible yield and the worst performance are provided "before" MC simulations so that designers can stop MC analysis and start improving circuits earlier. Moreover, this work can combine with variance reduction techniques or low-discrepancy sequences to reduce the MC simulation cost further. Using Latin Hypercube Sampling as an example, this approach gives 29x to 54x speedup over traditional MC analysis and the yield estimation errors are all smaller than 1%. For analog system designs, the proposed flow is still efficient for high-level MC analysis, as demonstrated by a PLL system.
Compact Model Parameter Extraction Using Bayesian Inference, Incomplete New Measurements, and Optimal Bias Selection In this paper, we propose a novel MOSFET parameter extraction method to enable early technology evaluation. The distinguishing feature of the proposed method is that it enables the extraction of MOSFET model parameters using limited and incomplete current-voltage measurements from on-chip monitor circuits. An important step in this method is the use of maximum-a-posteriori estimation where past measurements of transistors from various technologies are used to learn a prior distribution and its uncertainty matrix for the parameters of the target technology. The framework then utilizes Bayesian inference to facilitate extraction using a very small set of additional measurements. The proposed method is validated using various past technologies and post-silicon measurements for a commercial 28-nm process. The proposed extraction can be used to characterize the statistical variations of MOSFETs with the significant benefit that the restrictions imposed by the Backward Propagation of Variance (BPV) algorithm are relaxed. We also study the lower bound requirement for the number of transistor measurements needed to extract a full set of parameters for a compact model. Finally, we propose an efficient algorithm for selecting the optimal transistor biases by minimizing a cost function derived from information-theoretic concept of average marginal information gain.
Statistical Performance Modeling and Parametric Yield Estimation of MOS VLSI A major cost in statistical analysis occurs in repeated system simulation as system parameters are varied. To reduce this cost, the system performances are approximated by regression models in terms of critical system parameters. These models are then used to predict the performance variations and parametric yield. This paper presents a systematic and computationally efficient method for deriving regression models of MOS VLSI circuit performances that can be used to estimate the parametric yield. This method consists of four fundamental steps: simulation point selection, model fitting and validation, model improvement, and parametric yield estimation. An average mean-squared error criterion is used to select an optimal set of points in the design space for circuit simulations, and the adequacy of the fitted regression model is checked rigorously. It will be shown through examples that accurate statistical performance models and parametric yield estimate for MOS VLSI can be derived by using four or five critical device parameters and a small number of circuit simulations.
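To make the response-surface idea above concrete, the sketch below fits a quadratic regression model of a performance in a few critical parameters from a handful of "simulations" and then estimates parametric yield by cheap Monte Carlo on the fitted model. The simulator stand-in, spec limit, and sample counts are assumptions; the paper's point-selection and validation steps are not reproduced.

```python
import numpy as np

# Regression-based parametric yield sketch: quadratic response surface in four
# critical parameters, fitted from a small design of experiments, then sampled.

rng = np.random.default_rng(2)

def simulate(p):
    # Placeholder for a circuit simulation: performance vs. 4 device parameters.
    return 10.0 + p @ np.array([1.5, -0.8, 0.6, 0.3]) + 0.5 * p[0] * p[1] + 0.2 * p[2] ** 2

def quad_features(P):
    # [1, p_i, p_i*p_j (i<=j)] feature expansion for a full quadratic model.
    n, d = P.shape
    cross = [P[:, i] * P[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), P] + cross)

# Small design of experiments in the 4 critical parameters (units of 1 sigma).
P_doe = rng.uniform(-2, 2, size=(40, 4))
y_doe = np.array([simulate(p) for p in P_doe])
beta, *_ = np.linalg.lstsq(quad_features(P_doe), y_doe, rcond=None)

# Parametric yield: fraction of the parameter distribution meeting spec >= 9.0.
P_mc = rng.standard_normal((100_000, 4))
perf = quad_features(P_mc) @ beta
print("estimated parametric yield:", np.mean(perf >= 9.0))
```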
Finding Deterministic Solution From Underdetermined Equation: Large-Scale Performance Variability Modeling of Analog/RF Circuits The aggressive scaling of integrated circuit technology results in high-dimensional, strongly-nonlinear performance variability that cannot be efficiently captured by traditional modeling techniques. In this paper, we adapt a novel L0-norm regularization method to address this modeling challenge. Our goal is to solve a large number of (e.g., 10^4-10^6) model coefficients from a small set of (e.g., 10^2-10^3) sampling points without over-fitting. This is facilitated by exploiting the underlying sparsity of model coefficients. Namely, although numerous basis functions are needed to span the high-dimensional, strongly-nonlinear variation space, only a few of them play an important role for a given performance of interest. An efficient orthogonal matching pursuit (OMP) algorithm is applied to automatically select these important basis functions based on a limited number of simulation samples. Several circuit examples designed in a commercial 65 nm process demonstrate that OMP achieves up to 25× speedup compared to the traditional least-squares fitting method.
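A minimal orthogonal matching pursuit sketch, matching the greedy basis-selection step described above: from far fewer samples than coefficients, iteratively pick the basis column most correlated with the residual and least-squares fit on the selected support. Problem sizes and noise level are illustrative, not the paper's.

```python
import numpy as np

# Minimal OMP: greedily select the few basis functions that matter and fit them.

def omp(Phi, y, n_select):
    residual, support = y.copy(), []
    for _ in range(n_select):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    w = np.zeros(Phi.shape[1]); w[support] = coef
    return w

rng = np.random.default_rng(3)
n_samples, n_basis = 150, 2000                 # far fewer samples than coefficients
Phi = rng.standard_normal((n_samples, n_basis))
w_true = np.zeros(n_basis)
w_true[rng.choice(n_basis, 8, replace=False)] = rng.normal(0, 1, 8)
y = Phi @ w_true + 0.01 * rng.standard_normal(n_samples)

w_hat = omp(Phi, y, n_select=8)
print("relative coefficient error:",
      np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```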
Why Quasi-Monte Carlo is Better Than Monte Carlo or Latin Hypercube Sampling for Statistical Circuit Analysis At the nanoscale, no circuit parameters are truly deterministic; most quantities of practical interest present themselves as probability distributions. Thus, Monte Carlo techniques comprise the strategy of choice for statistical circuit analysis. There are many challenges in applying these techniques efficiently: circuit size, nonlinearity, simulation time, and required accuracy often conspire to make Monte Carlo analysis expensive and slow. Are we-the integrated circuit community-alone in facing such problems? As it turns out, the answer is “no.” Problems in computational finance share many of these characteristics: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive sample evaluation. We perform a detailed experimental study of how one celebrated technique from that domain-quasi-Monte Carlo (QMC) simulation-can be adapted effectively for fast statistical circuit analysis. In contrast to traditional pseudorandom Monte Carlo sampling, QMC uses a (shorter) sequence of deterministically chosen sample points. We perform rigorous comparisons with both Monte Carlo and Latin hypercube sampling across a set of digital and analog circuits, in 90 and 45 nm technologies, varying in size from 30 to 400 devices. We consistently see superior performance from QMC, giving 2× to 8× speedup over conventional Monte Carlo for roughly 1% accuracy levels. We present rigorous theoretical arguments that support and explain this superior performance of QMC. The arguments also reveal insights regarding the (low) latent dimensionality of these circuit problems; for example, we observe that over half of the variance in our test circuits is from unidimensional behavior. This analysis provides quantitative support for recent enthusiasm in dimensionality reduction of circuit problems.
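A small side-by-side sketch of the comparison described above: pseudorandom Monte Carlo versus scrambled Sobol quasi-Monte Carlo for a toy yield estimate over ten Gaussian process parameters. The performance function and spec limit are stand-ins, and the qmc module requires SciPy 1.7 or newer; the experiment is only meant to show the reduced estimator spread one typically observes with QMC.

```python
import numpy as np
from scipy.stats import norm, qmc   # scipy >= 1.7 for scipy.stats.qmc

def performance(z):
    # Toy nonlinear performance of 10 standard-normal process parameters.
    return 5.0 + z @ np.linspace(1.0, 0.1, z.shape[-1]) + 0.3 * np.sin(z[:, 0] * z[:, 1])

def yield_mc(n, seed):
    z = np.random.default_rng(seed).standard_normal((n, 10))
    return np.mean(performance(z) > 4.0)

def yield_qmc(m, seed):
    u = qmc.Sobol(d=10, scramble=True, seed=seed).random_base2(m)
    z = norm.ppf(u)                      # map uniforms to standard normals
    return np.mean(performance(z) > 4.0)

n, m = 2**12, 12                         # 4096 samples per estimate
mc_runs  = [yield_mc(n, s) for s in range(20)]
qmc_runs = [yield_qmc(m, s) for s in range(20)]
print("MC  yield estimate std :", np.std(mc_runs))
print("QMC yield estimate std :", np.std(qmc_runs))
```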
The impact of intrinsic device fluctuations on CMOS SRAM cell stability Reductions in CMOS SRAM cell static noise margin (SNM) due to intrinsic threshold voltage fluctuations in uniformly doped minimum-geometry cell MOSFETs are investigated for the first time using compact physical and stochastic models. Six sigma deviations in SNM due to intrinsic fluctuations alone are projected to exceed the nominal SNM for sub-100-nm CMOS technology generations. These large deviations pose severe barriers to scaling of supply voltage, channel length, and transistor count for conventional 6T SRAM-dominated CMOS ASICs and microprocessors.
Counter braids: a novel counter architecture for per-flow measurement Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids "compresses while counting". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by "braiding" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow.
Learning with dynamic group sparsity This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can recover stably sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in the practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
Aging analysis at gate and macro cell level Aging, which can be regarded as a time-dependent variability, has until recently not received much attention in the field of electronic design automation. This is changing because increasing reliability costs threaten the continued scaling of ICs. We investigate the impact of aging effects on single combinatorial gates and present methods that help to reduce the reliability costs by accurately analyzing the performance degradation of aged circuits at gate and macro cell level.
Compound Linguistic Scale. •Compound Linguistic Scale comprises Compound Linguistic Variable, Fuzzy Normal Distribution and Deductive Rating Strategy.•CLV can produce two-dimensional options, i.e. compound linguistic terms, to better reflect the raters' preferences.•DRS is a two-step rating approach for a rater to choose a compound linguistic term among two-dimensional options.•FND can efficiently produce a population of fuzzy numbers for a linguistic term set using only a few parameters.•CLS, as a rating interface, can be applied to various application domains in engineering and the social sciences.
Fundamentals Of Clinical Methodology: 2. Etiology The concept of etiology is analyzed and the possibilities and limitations of deterministic, probabilistic, and fuzzy etiology are explored. Different kinds of formal structures for the relation of causation are introduced which enable us to explicate the notion of cause on qualitative, comparative, and quantitative levels. The conceptual framework developed is an approach to a theory of causality that may be useful in etiologic research, in building nosological systems, and in differential diagnosis, therapeutic decision-making, and controlled clinical trials. The bearings of the theory are exemplified by examining the current Chlamydia pneumoniae hypothesis on the incidence of myocardial infarction.
Dominance-based fuzzy rough set analysis of uncertain and possibilistic data tables In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which is comprised of a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.
The laws of large numbers for fuzzy random variables A new treatment of the weak and strong laws of large numbers for fuzzy random variables is presented in this paper by proposing the notions of convergence in probability and convergence with probability one for fuzzy random variables. We first consider the limit properties of fuzzy numbers by invoking the Hausdorff metric, and then we extend them to convergence in probability and convergence with probability one for fuzzy random variables. We provide the notions of weak and strong convergence in probability and weak and strong convergence with probability one for fuzzy random variables. Finally, we establish the weak and strong laws of large numbers for fuzzy random variables.
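For orientation, one common way to metrize fuzzy numbers via the Hausdorff distance on alpha-cuts, and the resulting notion of convergence in probability used in such limit theorems, is sketched below; the paper's precise definitions may differ in detail.

```latex
% Supremum metric on fuzzy numbers via alpha-cuts, and convergence in
% probability of fuzzy random variables with respect to it:
d_\infty(\tilde{u},\tilde{v}) = \sup_{\alpha\in(0,1]} d_H\!\bigl(\tilde{u}_\alpha,\tilde{v}_\alpha\bigr),
\qquad
\tilde{X}_n \xrightarrow{\;P\;} \tilde{X}
\iff
\lim_{n\to\infty} P\bigl(d_\infty(\tilde{X}_n,\tilde{X}) \ge \varepsilon\bigr) = 0
\quad \text{for every } \varepsilon > 0 .
```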
1.071111
0.08
0.08
0.020081
0.008889
0.00206
0.00007
0
0
0
0
0
0
0
Efficient sampling of sparse wideband analog signals Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.
Uncertainty relations for shift-invariant analog signals The past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited lowpass train that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.
Uncertainty Relations for Analog Signals In the past several years there has been a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited low-pass comb that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.
Beyond Nyquist: efficient sampling of sparse bandlimited signals Wideband analog signals push contemporary analog-to-digital conversion (ADC) systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the band limit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its band limit in hertz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W hertz. In contrast to Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
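The following is a minimal simulation of the random-demodulator acquisition chain described above: the sparse multitone input is multiplied by a pseudorandom ±1 chipping sequence at the Nyquist rate and then integrated and dumped at the much lower rate. The values of W, R, and K are illustrative, and the nonlinear recovery step (e.g., convex programming) is omitted.

```python
import numpy as np

# Random-demodulator measurement sketch: chip, integrate-and-dump, downsample.

rng = np.random.default_rng(4)
W, R, K = 1024, 64, 5                       # Nyquist rate, sampling rate, sparsity
t = np.arange(W) / W                        # one second of samples at rate W

freqs = rng.choice(W // 2, K, replace=False)
x = sum(np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)

chips = rng.choice([-1.0, 1.0], size=W)     # pseudorandom chipping sequence
mixed = chips * x                           # demodulation spreads the tones
y = mixed.reshape(R, W // R).sum(axis=1)    # integrate-and-dump at rate R

print("low-rate samples acquired:", y.shape[0], "instead of", W)
```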
Toeplitz compressed sensing matrices with applications to sparse channel estimation Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then with high probability the unknown signals can be recovered by solving a tractable convex optimization. This work extends CS theory to settings where the entries of the test vectors exhibit structured statistical dependencies. It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general statistical dependencies.
Robust recovery of signals from a structured union of subspaces Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed l2/l1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
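For concreteness, the mixed l2/l1 program referred to above can be written as follows, with the coefficient vector partitioned into blocks; the notation (D for the measurement/dictionary matrix, c[i] for the i-th block) is introduced here for illustration.

```latex
% Mixed l2/l1 program for block-sparse recovery:
\min_{c}\; \sum_{i=1}^{m} \bigl\lVert c[i] \bigr\rVert_2
\quad \text{subject to} \quad y = D\,c ,
% whose minimizer equals the true block-sparse vector under a block
% restricted isometry property (block-RIP) condition.
```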
A Theory for Sampling Signals From a Union of Subspaces One of the fundamental assumptions in traditional sampling theorems is that the signals to be sampled come from a single vector space (e.g., bandlimited functions). However, in many cases of practical interest the sampled signals actually live in a union of subspaces. Examples include piecewise polynomials, sparse representations, nonuniform splines, signals with unknown spectral support, overlapping echoes with unknown delay and amplitude, and so on. For these signals, traditional sampling schemes based on the single subspace assumption can be either inapplicable or highly inefficient. In this paper, we study a general sampling framework where sampled signals come from a known union of subspaces and the sampling operator is linear. Geometrically, the sampling operator can be viewed as projecting sampled signals into a lower dimensional space, while still preserving all the information. We derive necessary and sufficient conditions for invertible and stable sampling operators in this framework and show that these conditions are applicable in many cases. Furthermore, we find the minimum sampling requirements for several classes of signals, which indicates the power of the framework. The results in this paper can serve as a guideline for designing new algorithms for various applications in signal processing and inverse problems.
Blind Compressed Sensing The fundamental principle underlying compressed sensing is that a signal, which is sparse under some basis representation, can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can be added to the problem in order to guarantee a unique solution. For each constraint, we prove conditions for uniqueness, and suggest a simple method to retrieve the solution. We demonstrate through simulations that our methods can achieve results similar to those of standard compressed sensing, which rely on prior knowledge of the sparsity basis, as long as the signals are sparse enough. This offers a general sampling and reconstruction system that fits all sparse signals, regardless of the sparsity basis, under the conditions and constraints presented in this work.
Expression-Insensitive 3D Face Recognition Using Sparse Representation We present a face recognition method based on sparse representation for recognizing 3D face meshes under expressions using low-level geometric features. First, to enable the application of the sparse representation framework, we develop a uniform remeshing scheme to establish a consistent sampling pattern across 3D faces. To handle facial expressions, we design a feature pooling and ranking scheme to collect various types of low-level geometric features and rank them according to their sensitivities to facial expressions. By simply applying the sparse representation framework to the collected low-level features, our proposed method already achieves satisfactory recognition rates, which demonstrates the efficacy of the framework for 3D face recognition. To further improve results in the presence of severe facial expressions, we show that by choosing higher-ranked, i.e., expression-insensitive, features, the recognition rates approach those for neutral faces, without requiring an extensive set of reference faces for each individual to cover possible variations caused by expressions, as proposed in previous work. We apply our face recognition method to the GavabDB and FRGC 2.0 databases and demonstrate encouraging results.
Stochastic Sparse-grid Collocation Algorithm (SSCA) for Periodic Steady-State Analysis of Nonlinear System with Process Variations In this paper, a stochastic collocation algorithm combined with the sparse grid technique (SSCA) is proposed to deal with periodic steady-state analysis for nonlinear systems with process variations. Compared to existing approaches, SSCA has several considerable merits. Firstly, compared with moment-matching parameterized model order reduction (PMOR), which treats the circuit response on process variables and the frequency parameter equally by Taylor approximation, SSCA employs homogeneous chaos to capture the impact of process variations with an exponential convergence rate and adopts Fourier series or wavelet bases to model the steady-state behavior in the time domain. Secondly, contrary to the stochastic Galerkin algorithm (SGA), which is efficient for stochastic linear system analysis, the complexity of SSCA is much smaller than that of SGA for the nonlinear case. Thirdly, different from the efficient collocation method, a heuristic approach that may result in the "rank deficient problem" and the "Runge phenomenon", the sparse grid technique is developed to select the collocation points in SSCA in order to reduce the complexity while guaranteeing the approximation accuracy. Furthermore, though SSCA is proposed for stochastic nonlinear steady-state analysis, it can be applied to other kinds of nonlinear system simulation with process variations, such as transient analysis.
Digital Circuit Design Challenges and Opportunities in the Era of Nanoscale CMOS Well-designed circuits are one key "insulating" layer between the increasingly unruly behavior of scaled complementary metal-oxide-semiconductor devices and the systems we seek to construct from them. As we move forward into the nanoscale regime, circuit design is burdened to "hide" more of the problems intrinsic to deeply scaled devices. How this is being accomplished is the subje...
Residual implications on the set of discrete fuzzy numbers In this paper residual implications defined on the set of discrete fuzzy numbers whose support is a set of consecutive natural numbers are studied. A specific construction of these implications is given and some examples are presented showing in particular that such a construction generalizes the case of interval-valued residual implications. The most usual properties for these operations are investigated leading to a residuated lattice structure on the set of discrete fuzzy numbers, that in general is not an MTL-algebra.
Management Of Uncertainty And Spatio-Temporal Aspects For Monitoring And Diagnosis In A Smart Home The health system in developed countries is facing a problem of scalability in order to accommodate the increased proportion of the elderly population. Scarce resources cannot be sustained unless innovative technology is considered to provide health care in a more effective way. The Smart Home provides preventive and assistive technology to vulnerable sectors of the population. Much research and development has been focused on the technological side (e.g., sensors and networks) but less effort has been invested in the capability of the Smart Home to intelligently monitor situations of interest and act in the best interest of the occupants. In this article we model a Smart Home scenario, using knowledge in the form of Event-Condition-Action rules together with a new inference scheme which incorporates spatio-temporal reasoning and uncertainty. A reasoning system called RIMER, has been extended to permit the monitoring of situations according to the place where they occur and the specific order and duration of the activities. The system allows for the specification of uncertainty both in terms of knowledge representation and credibility of the conclusions that can be achieved in terms of the evidence available.
Mesh denoising via L0 minimization We present an algorithm for denoising triangulated models based on L0 minimization. Our method maximizes the flat regions of the model and gradually removes noise while preserving sharp features. As part of this process, we build a discrete differential operator for arbitrary triangle meshes that is robust with respect to degenerate triangulations. We compare our method versus other anisotropic denoising algorithms and demonstrate that our method is more robust and produces good results even in the presence of high noise.
1.101685
0.023411
0.02331
0.01418
0.012083
0.005046
0.000489
0.0001
0.000013
0
0
0
0
0